00:00:00.001 Started by upstream project "autotest-per-patch" build number 132032 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.028 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.029 The recommended git tool is: git 00:00:00.030 using credential 00000000-0000-0000-0000-000000000002 00:00:00.031 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.047 Fetching changes from the remote Git repository 00:00:00.049 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.073 Using shallow fetch with depth 1 00:00:00.073 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.073 > git --version # timeout=10 00:00:00.117 > git --version # 'git version 2.39.2' 00:00:00.117 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.158 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.158 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.943 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.956 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.966 Checking out Revision 71582ff3be096f9d5ed302be37c05572278bd285 (FETCH_HEAD) 00:00:02.966 > git config core.sparsecheckout # timeout=10 00:00:02.976 > git read-tree -mu HEAD # timeout=10 00:00:02.993 > git checkout -f 71582ff3be096f9d5ed302be37c05572278bd285 # timeout=5 00:00:03.013 Commit message: "jenkins/jjb-config: Add SPDK_TEST_NVME_INTERRUPT to nvme-phy job" 00:00:03.013 > git rev-list 
--no-walk 71582ff3be096f9d5ed302be37c05572278bd285 # timeout=10 00:00:03.101 [Pipeline] Start of Pipeline 00:00:03.112 [Pipeline] library 00:00:03.114 Loading library shm_lib@master 00:00:03.114 Library shm_lib@master is cached. Copying from home. 00:00:03.129 [Pipeline] node 00:00:03.135 Running on VM-host-SM17 in /var/jenkins/workspace/raid-vg-autotest_2 00:00:03.138 [Pipeline] { 00:00:03.148 [Pipeline] catchError 00:00:03.150 [Pipeline] { 00:00:03.161 [Pipeline] wrap 00:00:03.167 [Pipeline] { 00:00:03.173 [Pipeline] stage 00:00:03.175 [Pipeline] { (Prologue) 00:00:03.189 [Pipeline] echo 00:00:03.190 Node: VM-host-SM17 00:00:03.195 [Pipeline] cleanWs 00:00:03.203 [WS-CLEANUP] Deleting project workspace... 00:00:03.203 [WS-CLEANUP] Deferred wipeout is used... 00:00:03.209 [WS-CLEANUP] done 00:00:03.400 [Pipeline] setCustomBuildProperty 00:00:03.489 [Pipeline] httpRequest 00:00:03.892 [Pipeline] echo 00:00:03.894 Sorcerer 10.211.164.101 is alive 00:00:03.901 [Pipeline] retry 00:00:03.902 [Pipeline] { 00:00:03.914 [Pipeline] httpRequest 00:00:03.919 HttpMethod: GET 00:00:03.919 URL: http://10.211.164.101/packages/jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz 00:00:03.920 Sending request to url: http://10.211.164.101/packages/jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz 00:00:03.935 Response Code: HTTP/1.1 200 OK 00:00:03.935 Success: Status code 200 is in the accepted range: 200,404 00:00:03.936 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz 00:00:05.307 [Pipeline] } 00:00:05.322 [Pipeline] // retry 00:00:05.328 [Pipeline] sh 00:00:05.607 + tar --no-same-owner -xf jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz 00:00:05.621 [Pipeline] httpRequest 00:00:06.019 [Pipeline] echo 00:00:06.021 Sorcerer 10.211.164.101 is alive 00:00:06.030 [Pipeline] retry 00:00:06.032 [Pipeline] { 00:00:06.048 [Pipeline] httpRequest 00:00:06.052 HttpMethod: GET 00:00:06.053 URL: 
http://10.211.164.101/packages/spdk_78b0a6b787152839b13869c12d5cb7221090310b.tar.gz 00:00:06.054 Sending request to url: http://10.211.164.101/packages/spdk_78b0a6b787152839b13869c12d5cb7221090310b.tar.gz 00:00:06.058 Response Code: HTTP/1.1 200 OK 00:00:06.058 Success: Status code 200 is in the accepted range: 200,404 00:00:06.059 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/spdk_78b0a6b787152839b13869c12d5cb7221090310b.tar.gz 00:02:38.990 [Pipeline] } 00:02:39.009 [Pipeline] // retry 00:02:39.016 [Pipeline] sh 00:02:39.294 + tar --no-same-owner -xf spdk_78b0a6b787152839b13869c12d5cb7221090310b.tar.gz 00:02:42.637 [Pipeline] sh 00:02:42.930 + git -C spdk log --oneline -n5 00:02:42.930 78b0a6b78 nvme/rdma: Support accel sequence 00:02:42.930 6e713f9c6 lib/rdma_provider: Add API to check if accel seq supported 00:02:42.930 477ec7110 lib/mlx5: Add API to check if UMR registration supported 00:02:42.930 8ee9fa114 accel/mlx5: Merge crypto+copy to reg UMR 00:02:42.930 ce6a621c4 accel/mlx5: Initial implementation of mlx5 platform driver 00:02:42.956 [Pipeline] writeFile 00:02:42.970 [Pipeline] sh 00:02:43.299 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:43.318 [Pipeline] sh 00:02:43.624 + cat autorun-spdk.conf 00:02:43.624 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:43.624 SPDK_RUN_ASAN=1 00:02:43.624 SPDK_RUN_UBSAN=1 00:02:43.624 SPDK_TEST_RAID=1 00:02:43.624 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:43.641 RUN_NIGHTLY=0 00:02:43.643 [Pipeline] } 00:02:43.656 [Pipeline] // stage 00:02:43.670 [Pipeline] stage 00:02:43.672 [Pipeline] { (Run VM) 00:02:43.684 [Pipeline] sh 00:02:43.998 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:43.998 + echo 'Start stage prepare_nvme.sh' 00:02:43.998 Start stage prepare_nvme.sh 00:02:43.998 + [[ -n 1 ]] 00:02:43.998 + disk_prefix=ex1 00:02:43.998 + [[ -n /var/jenkins/workspace/raid-vg-autotest_2 ]] 00:02:43.998 + [[ -e /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf ]] 
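The prepare_nvme.sh trace that follows drives image creation from a bash associative array mapping image filenames to sizes (`declare -A nvme_files`, then iterating `"${!nvme_files[@]}"`). A minimal sketch of that pattern, with names and sizes taken from the log; the real `create_nvme_img.sh` call is stood in for by a `printf`:

```shell
#!/usr/bin/env bash
# Map image name -> size, as prepare_nvme.sh does.
declare -A nvme_files=(
  ["nvme.img"]=5G
  ["nvme-multi0.img"]=4G
  ["nvme-multi1.img"]=4G
)
backend_dir=/var/lib/libvirt/images/backends
disk_prefix=ex1

# "${!nvme_files[@]}" expands to the keys; the real script
# invokes spdk/scripts/vagrant/create_nvme_img.sh here instead.
for nvme in "${!nvme_files[@]}"; do
  printf 'would create %s/%s-%s (size %s)\n' \
    "$backend_dir" "$disk_prefix" "$nvme" "${nvme_files[$nvme]}"
done
```

Note that associative arrays iterate in arbitrary key order, which is why the log's creation order (multi2, cmb, openstack, ...) does not match the declaration order.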
00:02:43.998 + source /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf 00:02:43.998 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:43.998 ++ SPDK_RUN_ASAN=1 00:02:43.998 ++ SPDK_RUN_UBSAN=1 00:02:43.998 ++ SPDK_TEST_RAID=1 00:02:43.998 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:43.998 ++ RUN_NIGHTLY=0 00:02:43.998 + cd /var/jenkins/workspace/raid-vg-autotest_2 00:02:43.998 + nvme_files=() 00:02:43.998 + declare -A nvme_files 00:02:43.998 + backend_dir=/var/lib/libvirt/images/backends 00:02:43.998 + nvme_files['nvme.img']=5G 00:02:43.998 + nvme_files['nvme-cmb.img']=5G 00:02:43.998 + nvme_files['nvme-multi0.img']=4G 00:02:43.998 + nvme_files['nvme-multi1.img']=4G 00:02:43.998 + nvme_files['nvme-multi2.img']=4G 00:02:43.998 + nvme_files['nvme-openstack.img']=8G 00:02:43.998 + nvme_files['nvme-zns.img']=5G 00:02:43.998 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:43.998 + (( SPDK_TEST_FTL == 1 )) 00:02:43.998 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:43.998 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:02:43.998 + for nvme in "${!nvme_files[@]}" 00:02:43.998 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:02:43.998 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:43.998 + for nvme in "${!nvme_files[@]}" 00:02:43.998 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:02:43.998 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:43.998 + for nvme in "${!nvme_files[@]}" 00:02:43.998 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:02:43.998 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:43.998 + for nvme in "${!nvme_files[@]}" 00:02:43.998 + sudo -E 
spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:02:43.998 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:43.998 + for nvme in "${!nvme_files[@]}" 00:02:43.998 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:02:43.998 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:43.998 + for nvme in "${!nvme_files[@]}" 00:02:43.998 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:02:43.998 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:43.998 + for nvme in "${!nvme_files[@]}" 00:02:43.998 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:02:44.264 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:44.264 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:02:44.264 + echo 'End stage prepare_nvme.sh' 00:02:44.264 End stage prepare_nvme.sh 00:02:44.274 [Pipeline] sh 00:02:44.553 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:44.553 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39 00:02:44.553 00:02:44.553 DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant 00:02:44.553 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk 00:02:44.553 
VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_2 00:02:44.553 HELP=0 00:02:44.553 DRY_RUN=0 00:02:44.553 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:02:44.553 NVME_DISKS_TYPE=nvme,nvme, 00:02:44.553 NVME_AUTO_CREATE=0 00:02:44.553 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:02:44.553 NVME_CMB=,, 00:02:44.553 NVME_PMR=,, 00:02:44.553 NVME_ZNS=,, 00:02:44.553 NVME_MS=,, 00:02:44.553 NVME_FDP=,, 00:02:44.553 SPDK_VAGRANT_DISTRO=fedora39 00:02:44.553 SPDK_VAGRANT_VMCPU=10 00:02:44.553 SPDK_VAGRANT_VMRAM=12288 00:02:44.553 SPDK_VAGRANT_PROVIDER=libvirt 00:02:44.553 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:44.553 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:44.553 SPDK_OPENSTACK_NETWORK=0 00:02:44.553 VAGRANT_PACKAGE_BOX=0 00:02:44.553 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:02:44.553 FORCE_DISTRO=true 00:02:44.553 VAGRANT_BOX_VERSION= 00:02:44.553 EXTRA_VAGRANTFILES= 00:02:44.553 NIC_MODEL=e1000 00:02:44.553 00:02:44.553 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt' 00:02:44.553 /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_2 00:02:47.869 Bringing machine 'default' up with 'libvirt' provider... 00:02:48.823 ==> default: Creating image (snapshot of base box volume). 00:02:48.823 ==> default: Creating domain with the following settings... 
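The `NVME_DISKS_NAMESPACES` field above packs extra namespace images behind colons (`multi1.img:multi2.img`), with commas separating per-controller entries. How vagrant_create_vm.sh actually parses this is not shown in the log; the standard bash idiom for splitting such a colon-delimited field would be:

```shell
#!/usr/bin/env bash
# Split a colon-delimited namespace spec into an array.
# (Illustrative only; the real parsing logic is in the Vagrantfile
# and vagrant_create_vm.sh, which this log does not show.)
spec="/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img"

IFS=':' read -r -a namespaces <<< "$spec"
for ns in "${namespaces[@]}"; do
  echo "namespace image: $ns"
done
```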
00:02:48.823 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730730527_888bd41e62faa3427239 00:02:48.823 ==> default: -- Domain type: kvm 00:02:48.823 ==> default: -- Cpus: 10 00:02:48.823 ==> default: -- Feature: acpi 00:02:48.823 ==> default: -- Feature: apic 00:02:48.823 ==> default: -- Feature: pae 00:02:48.823 ==> default: -- Memory: 12288M 00:02:48.823 ==> default: -- Memory Backing: hugepages: 00:02:48.823 ==> default: -- Management MAC: 00:02:48.823 ==> default: -- Loader: 00:02:48.823 ==> default: -- Nvram: 00:02:48.823 ==> default: -- Base box: spdk/fedora39 00:02:48.823 ==> default: -- Storage pool: default 00:02:48.823 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730730527_888bd41e62faa3427239.img (20G) 00:02:48.823 ==> default: -- Volume Cache: default 00:02:48.823 ==> default: -- Kernel: 00:02:48.823 ==> default: -- Initrd: 00:02:48.823 ==> default: -- Graphics Type: vnc 00:02:48.823 ==> default: -- Graphics Port: -1 00:02:48.823 ==> default: -- Graphics IP: 127.0.0.1 00:02:48.823 ==> default: -- Graphics Password: Not defined 00:02:48.823 ==> default: -- Video Type: cirrus 00:02:48.823 ==> default: -- Video VRAM: 9216 00:02:48.823 ==> default: -- Sound Type: 00:02:48.823 ==> default: -- Keymap: en-us 00:02:48.823 ==> default: -- TPM Path: 00:02:48.823 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:48.823 ==> default: -- Command line args: 00:02:48.823 ==> default: -> value=-device, 00:02:48.823 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:48.823 ==> default: -> value=-drive, 00:02:48.823 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:02:48.823 ==> default: -> value=-device, 00:02:48.823 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:48.823 ==> default: -> value=-device, 00:02:48.823 ==> default: -> 
value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:48.823 ==> default: -> value=-drive, 00:02:48.823 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:02:48.823 ==> default: -> value=-device, 00:02:48.823 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:48.823 ==> default: -> value=-drive, 00:02:48.823 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:02:48.823 ==> default: -> value=-device, 00:02:48.823 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:48.824 ==> default: -> value=-drive, 00:02:48.824 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:02:48.824 ==> default: -> value=-device, 00:02:48.824 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:49.096 ==> default: Creating shared folders metadata... 00:02:49.096 ==> default: Starting domain. 00:02:50.470 ==> default: Waiting for domain to get an IP address... 00:03:08.562 ==> default: Waiting for SSH to become available... 00:03:08.562 ==> default: Configuring and enabling network interfaces... 00:03:11.091 default: SSH address: 192.168.121.177:22 00:03:11.091 default: SSH username: vagrant 00:03:11.091 default: SSH auth method: private key 00:03:12.995 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:03:21.142 ==> default: Mounting SSHFS shared folder... 00:03:22.517 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:03:22.517 ==> default: Checking Mount.. 
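The `-> value=` lines above are the QEMU argument list being assembled: one `-device nvme` controller per disk spec, then one `-drive`/`-device nvme-ns` pair per backing image, so each image becomes a namespace (nsid 1..3) on the same controller. A sketch of building that list in bash, with IDs and paths taken from the log; this only prints the arguments, it does not launch QEMU:

```shell
#!/usr/bin/env bash
# Build QEMU args for one NVMe controller with three namespaces,
# mirroring the nvme-1 controller in the log above.
imgs=(/var/lib/libvirt/images/backends/ex1-nvme-multi0.img
      /var/lib/libvirt/images/backends/ex1-nvme-multi1.img
      /var/lib/libvirt/images/backends/ex1-nvme-multi2.img)

args=(-device "nvme,id=nvme-1,serial=12341,addr=0x11")
nsid=1
for img in "${imgs[@]}"; do
  drive="nvme-1-drive$((nsid - 1))"
  args+=(-drive  "format=raw,file=$img,if=none,id=$drive")
  args+=(-device "nvme-ns,drive=$drive,bus=nvme-1,nsid=$nsid,zoned=false,logical_block_size=4096,physical_block_size=4096")
  nsid=$((nsid + 1))
done

printf '%s\n' "${args[@]}"
```

This matches the `nvme1n1`/`nvme1n2`/`nvme1n3` block devices reported by `setup.sh status` later in the log.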
00:03:23.454 ==> default: Folder Successfully Mounted! 00:03:23.454 ==> default: Running provisioner: file... 00:03:24.391 default: ~/.gitconfig => .gitconfig 00:03:24.650 00:03:24.650 SUCCESS! 00:03:24.650 00:03:24.650 cd to /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:03:24.650 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:03:24.650 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 00:03:24.650 00:03:24.659 [Pipeline] } 00:03:24.677 [Pipeline] // stage 00:03:24.688 [Pipeline] dir 00:03:24.689 Running in /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt 00:03:24.691 [Pipeline] { 00:03:24.706 [Pipeline] catchError 00:03:24.708 [Pipeline] { 00:03:24.721 [Pipeline] sh 00:03:25.004 + vagrant ssh-config --host vagrant 00:03:25.004 + sed -ne /^Host/,$p 00:03:25.004 + tee ssh_conf 00:03:29.196 Host vagrant 00:03:29.196 HostName 192.168.121.177 00:03:29.196 User vagrant 00:03:29.196 Port 22 00:03:29.196 UserKnownHostsFile /dev/null 00:03:29.196 StrictHostKeyChecking no 00:03:29.196 PasswordAuthentication no 00:03:29.196 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:03:29.196 IdentitiesOnly yes 00:03:29.196 LogLevel FATAL 00:03:29.196 ForwardAgent yes 00:03:29.196 ForwardX11 yes 00:03:29.196 00:03:29.210 [Pipeline] withEnv 00:03:29.213 [Pipeline] { 00:03:29.228 [Pipeline] sh 00:03:29.512 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:03:29.512 source /etc/os-release 00:03:29.512 [[ -e /image.version ]] && img=$(< /image.version) 00:03:29.512 # Minimal, systemd-like check. 
00:03:29.512 if [[ -e /.dockerenv ]]; then 00:03:29.512 # Clear garbage from the node's name: 00:03:29.512 # agt-er_autotest_547-896 -> autotest_547-896 00:03:29.512 # $HOSTNAME is the actual container id 00:03:29.512 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:03:29.512 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:03:29.512 # We can assume this is a mount from a host where container is running, 00:03:29.512 # so fetch its hostname to easily identify the target swarm worker. 00:03:29.512 container="$(< /etc/hostname) ($agent)" 00:03:29.512 else 00:03:29.512 # Fallback 00:03:29.512 container=$agent 00:03:29.512 fi 00:03:29.512 fi 00:03:29.512 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:03:29.512 00:03:29.783 [Pipeline] } 00:03:29.801 [Pipeline] // withEnv 00:03:29.811 [Pipeline] setCustomBuildProperty 00:03:29.827 [Pipeline] stage 00:03:29.829 [Pipeline] { (Tests) 00:03:29.850 [Pipeline] sh 00:03:30.131 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:03:30.145 [Pipeline] sh 00:03:30.454 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:03:30.471 [Pipeline] timeout 00:03:30.471 Timeout set to expire in 1 hr 30 min 00:03:30.473 [Pipeline] { 00:03:30.491 [Pipeline] sh 00:03:30.769 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:03:31.335 HEAD is now at 78b0a6b78 nvme/rdma: Support accel sequence 00:03:31.348 [Pipeline] sh 00:03:31.628 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:03:31.900 [Pipeline] sh 00:03:32.219 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:03:32.494 [Pipeline] sh 00:03:32.775 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest 
./autoruner.sh spdk_repo 00:03:33.033 ++ readlink -f spdk_repo 00:03:33.033 + DIR_ROOT=/home/vagrant/spdk_repo 00:03:33.033 + [[ -n /home/vagrant/spdk_repo ]] 00:03:33.033 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:03:33.033 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:03:33.033 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:03:33.033 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:03:33.033 + [[ -d /home/vagrant/spdk_repo/output ]] 00:03:33.034 + [[ raid-vg-autotest == pkgdep-* ]] 00:03:33.034 + cd /home/vagrant/spdk_repo 00:03:33.034 + source /etc/os-release 00:03:33.034 ++ NAME='Fedora Linux' 00:03:33.034 ++ VERSION='39 (Cloud Edition)' 00:03:33.034 ++ ID=fedora 00:03:33.034 ++ VERSION_ID=39 00:03:33.034 ++ VERSION_CODENAME= 00:03:33.034 ++ PLATFORM_ID=platform:f39 00:03:33.034 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:33.034 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:33.034 ++ LOGO=fedora-logo-icon 00:03:33.034 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:33.034 ++ HOME_URL=https://fedoraproject.org/ 00:03:33.034 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:33.034 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:33.034 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:33.034 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:33.034 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:33.034 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:33.034 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:33.034 ++ SUPPORT_END=2024-11-12 00:03:33.034 ++ VARIANT='Cloud Edition' 00:03:33.034 ++ VARIANT_ID=cloud 00:03:33.034 + uname -a 00:03:33.034 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:33.034 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:33.293 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:33.293 Hugepages 00:03:33.293 node hugesize free / total 
00:03:33.293 node0 1048576kB 0 / 0 00:03:33.293 node0 2048kB 0 / 0 00:03:33.293 00:03:33.293 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:33.552 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:33.552 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:33.552 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:33.552 + rm -f /tmp/spdk-ld-path 00:03:33.552 + source autorun-spdk.conf 00:03:33.552 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:33.552 ++ SPDK_RUN_ASAN=1 00:03:33.552 ++ SPDK_RUN_UBSAN=1 00:03:33.552 ++ SPDK_TEST_RAID=1 00:03:33.552 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:33.552 ++ RUN_NIGHTLY=0 00:03:33.552 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:33.552 + [[ -n '' ]] 00:03:33.552 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:03:33.552 + for M in /var/spdk/build-*-manifest.txt 00:03:33.552 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:33.552 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:33.552 + for M in /var/spdk/build-*-manifest.txt 00:03:33.552 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:33.552 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:33.552 + for M in /var/spdk/build-*-manifest.txt 00:03:33.552 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:33.552 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:33.552 ++ uname 00:03:33.552 + [[ Linux == \L\i\n\u\x ]] 00:03:33.552 + sudo dmesg -T 00:03:33.552 + sudo dmesg --clear 00:03:33.552 + dmesg_pid=5204 00:03:33.552 + [[ Fedora Linux == FreeBSD ]] 00:03:33.552 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:33.552 + sudo dmesg -Tw 00:03:33.552 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:33.552 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:33.552 + [[ -x /usr/src/fio-static/fio ]] 00:03:33.552 + export FIO_BIN=/usr/src/fio-static/fio 00:03:33.552 + 
FIO_BIN=/usr/src/fio-static/fio 00:03:33.552 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:33.552 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:33.552 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:33.552 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:33.552 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:33.552 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:33.552 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:33.552 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:33.552 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:33.552 14:29:32 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:03:33.552 14:29:32 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:33.552 14:29:32 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:33.552 14:29:32 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:03:33.552 14:29:32 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:03:33.552 14:29:32 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:03:33.553 14:29:32 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:33.553 14:29:32 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:03:33.553 14:29:32 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:03:33.553 14:29:32 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:33.811 14:29:32 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:03:33.811 14:29:32 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:33.811 14:29:32 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:33.811 14:29:32 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:33.811 14:29:32 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:33.811 14:29:32 -- 
scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:33.811 14:29:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.811 14:29:32 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.811 14:29:32 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.811 14:29:32 -- paths/export.sh@5 -- $ export PATH 00:03:33.811 14:29:32 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.811 14:29:32 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:33.811 14:29:32 -- common/autobuild_common.sh@486 -- $ date +%s 00:03:33.811 14:29:32 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730730572.XXXXXX 00:03:33.811 14:29:32 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730730572.CsPmYm 00:03:33.811 14:29:32 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:03:33.811 14:29:32 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:03:33.811 14:29:32 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:03:33.811 14:29:32 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:33.811 14:29:32 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:33.811 14:29:32 -- common/autobuild_common.sh@502 -- $ get_config_params 00:03:33.811 14:29:32 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:03:33.811 14:29:32 -- common/autotest_common.sh@10 -- $ set +x 00:03:33.811 14:29:32 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 
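The autobuild trace above assembles `config_params` and `scanbuild_exclude` by plain string concatenation (`+=`). A minimal sketch of that accumulation pattern; the flag names are copied from the log, but the exact mapping of conf variables to configure flags inside `get_config_params` is an assumption:

```shell
#!/usr/bin/env bash
# Accumulate configure flags conditionally, in the style of
# get_config_params / scanbuild_exclude above. Which conf variable
# gates which flag is assumed for illustration.
SPDK_RUN_ASAN=1
SPDK_RUN_UBSAN=1

config_params='--enable-debug --enable-werror'
[ "$SPDK_RUN_ASAN" = 1 ]  && config_params+=' --enable-asan'
[ "$SPDK_RUN_UBSAN" = 1 ] && config_params+=' --enable-ubsan'

echo "$config_params"
```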
00:03:33.811 14:29:32 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:03:33.811 14:29:32 -- pm/common@17 -- $ local monitor 00:03:33.811 14:29:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.811 14:29:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.811 14:29:32 -- pm/common@25 -- $ sleep 1 00:03:33.811 14:29:32 -- pm/common@21 -- $ date +%s 00:03:33.811 14:29:32 -- pm/common@21 -- $ date +%s 00:03:33.811 14:29:32 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730730572 00:03:33.811 14:29:32 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730730572 00:03:33.812 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730730572_collect-cpu-load.pm.log 00:03:33.812 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730730572_collect-vmstat.pm.log 00:03:34.747 14:29:33 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:03:34.747 14:29:33 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:34.747 14:29:33 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:34.747 14:29:33 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:34.747 14:29:33 -- spdk/autobuild.sh@16 -- $ date -u 00:03:34.747 Mon Nov 4 02:29:33 PM UTC 2024 00:03:34.747 14:29:33 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:34.747 v25.01-pre-170-g78b0a6b78 00:03:34.747 14:29:33 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:03:34.747 14:29:33 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:03:34.747 14:29:33 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:03:34.747 14:29:33 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:03:34.747 14:29:33 -- common/autotest_common.sh@10 -- $ set +x 
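Two idioms recur in the trace above: a timestamped scratch workspace via `mktemp -dt spdk_<epoch>.XXXXXX`, and cleanup hooks registered with `trap ... EXIT` (`stop_monitor_resources`, `timing_finish`). A self-contained sketch of the combination; the directory name echoes the log's `SPDK_WORKSPACE` but the cleanup body is illustrative:

```shell
#!/usr/bin/env bash
# Timestamped scratch dir plus guaranteed cleanup, the pattern
# behind SPDK_WORKSPACE and the 'trap ... EXIT' lines above.
ts=$(date +%s)
workspace=$(mktemp -dt "spdk_${ts}.XXXXXX")

cleanup() { rm -rf "$workspace"; }
trap cleanup EXIT   # runs on any exit path, success or failure

echo "workspace: $workspace"
test -d "$workspace"
```

Because the trap fires on every exit path, the workspace is removed even if a later step fails, which is what lets long CI runs avoid accumulating temp directories.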
00:03:34.747 ************************************ 00:03:34.747 START TEST asan 00:03:34.747 ************************************ 00:03:34.747 using asan 00:03:34.747 14:29:33 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan' 00:03:34.747 00:03:34.747 real 0m0.000s 00:03:34.747 user 0m0.000s 00:03:34.747 sys 0m0.000s 00:03:34.747 14:29:33 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:03:34.747 14:29:33 asan -- common/autotest_common.sh@10 -- $ set +x 00:03:34.747 ************************************ 00:03:34.747 END TEST asan 00:03:34.747 ************************************ 00:03:34.747 14:29:33 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:34.747 14:29:33 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:34.747 14:29:33 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:03:34.747 14:29:33 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:03:34.747 14:29:33 -- common/autotest_common.sh@10 -- $ set +x 00:03:34.747 ************************************ 00:03:34.747 START TEST ubsan 00:03:34.747 ************************************ 00:03:34.747 using ubsan 00:03:34.747 14:29:33 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:03:34.747 00:03:34.747 real 0m0.000s 00:03:34.747 user 0m0.000s 00:03:34.747 sys 0m0.000s 00:03:34.747 14:29:33 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:03:34.747 14:29:33 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:34.747 ************************************ 00:03:34.747 END TEST ubsan 00:03:34.747 ************************************ 00:03:34.747 14:29:33 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:34.747 14:29:33 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:34.747 14:29:33 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:34.747 14:29:33 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:34.747 14:29:33 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:34.747 14:29:33 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 
]] 00:03:34.747 14:29:33 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:34.747 14:29:33 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:34.747 14:29:33 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:03:35.006 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:35.006 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:35.264 Using 'verbs' RDMA provider 00:03:51.078 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:03.309 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:03.309 Creating mk/config.mk...done. 00:04:03.309 Creating mk/cc.flags.mk...done. 00:04:03.309 Type 'make' to build. 00:04:03.309 14:30:01 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:03.309 14:30:01 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:04:03.309 14:30:01 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:04:03.309 14:30:01 -- common/autotest_common.sh@10 -- $ set +x 00:04:03.309 ************************************ 00:04:03.309 START TEST make 00:04:03.309 ************************************ 00:04:03.309 14:30:01 make -- common/autotest_common.sh@1127 -- $ make -j10 00:04:03.309 make[1]: Nothing to be done for 'all'. 
00:04:15.514 The Meson build system 00:04:15.514 Version: 1.5.0 00:04:15.514 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:04:15.514 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:04:15.514 Build type: native build 00:04:15.514 Program cat found: YES (/usr/bin/cat) 00:04:15.514 Project name: DPDK 00:04:15.514 Project version: 24.03.0 00:04:15.514 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:15.514 C linker for the host machine: cc ld.bfd 2.40-14 00:04:15.514 Host machine cpu family: x86_64 00:04:15.514 Host machine cpu: x86_64 00:04:15.514 Message: ## Building in Developer Mode ## 00:04:15.514 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:15.514 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:04:15.514 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:15.514 Program python3 found: YES (/usr/bin/python3) 00:04:15.514 Program cat found: YES (/usr/bin/cat) 00:04:15.514 Compiler for C supports arguments -march=native: YES 00:04:15.514 Checking for size of "void *" : 8 00:04:15.514 Checking for size of "void *" : 8 (cached) 00:04:15.514 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:04:15.514 Library m found: YES 00:04:15.514 Library numa found: YES 00:04:15.514 Has header "numaif.h" : YES 00:04:15.514 Library fdt found: NO 00:04:15.514 Library execinfo found: NO 00:04:15.514 Has header "execinfo.h" : YES 00:04:15.514 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:15.514 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:15.515 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:15.515 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:15.515 Run-time dependency openssl found: YES 3.1.1 00:04:15.515 Run-time dependency libpcap found: YES 1.10.4 00:04:15.515 Has header "pcap.h" with dependency 
libpcap: YES 00:04:15.515 Compiler for C supports arguments -Wcast-qual: YES 00:04:15.515 Compiler for C supports arguments -Wdeprecated: YES 00:04:15.515 Compiler for C supports arguments -Wformat: YES 00:04:15.515 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:15.515 Compiler for C supports arguments -Wformat-security: NO 00:04:15.515 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:15.515 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:15.515 Compiler for C supports arguments -Wnested-externs: YES 00:04:15.515 Compiler for C supports arguments -Wold-style-definition: YES 00:04:15.515 Compiler for C supports arguments -Wpointer-arith: YES 00:04:15.515 Compiler for C supports arguments -Wsign-compare: YES 00:04:15.515 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:15.515 Compiler for C supports arguments -Wundef: YES 00:04:15.515 Compiler for C supports arguments -Wwrite-strings: YES 00:04:15.515 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:15.515 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:04:15.515 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:15.515 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:15.515 Program objdump found: YES (/usr/bin/objdump) 00:04:15.515 Compiler for C supports arguments -mavx512f: YES 00:04:15.515 Checking if "AVX512 checking" compiles: YES 00:04:15.515 Fetching value of define "__SSE4_2__" : 1 00:04:15.515 Fetching value of define "__AES__" : 1 00:04:15.515 Fetching value of define "__AVX__" : 1 00:04:15.515 Fetching value of define "__AVX2__" : 1 00:04:15.515 Fetching value of define "__AVX512BW__" : (undefined) 00:04:15.515 Fetching value of define "__AVX512CD__" : (undefined) 00:04:15.515 Fetching value of define "__AVX512DQ__" : (undefined) 00:04:15.515 Fetching value of define "__AVX512F__" : (undefined) 00:04:15.515 Fetching value of define "__AVX512VL__" : 
(undefined) 00:04:15.515 Fetching value of define "__PCLMUL__" : 1 00:04:15.515 Fetching value of define "__RDRND__" : 1 00:04:15.515 Fetching value of define "__RDSEED__" : 1 00:04:15.515 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:15.515 Fetching value of define "__znver1__" : (undefined) 00:04:15.515 Fetching value of define "__znver2__" : (undefined) 00:04:15.515 Fetching value of define "__znver3__" : (undefined) 00:04:15.515 Fetching value of define "__znver4__" : (undefined) 00:04:15.515 Library asan found: YES 00:04:15.515 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:15.515 Message: lib/log: Defining dependency "log" 00:04:15.515 Message: lib/kvargs: Defining dependency "kvargs" 00:04:15.515 Message: lib/telemetry: Defining dependency "telemetry" 00:04:15.515 Library rt found: YES 00:04:15.515 Checking for function "getentropy" : NO 00:04:15.515 Message: lib/eal: Defining dependency "eal" 00:04:15.515 Message: lib/ring: Defining dependency "ring" 00:04:15.515 Message: lib/rcu: Defining dependency "rcu" 00:04:15.515 Message: lib/mempool: Defining dependency "mempool" 00:04:15.515 Message: lib/mbuf: Defining dependency "mbuf" 00:04:15.515 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:15.515 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:04:15.515 Compiler for C supports arguments -mpclmul: YES 00:04:15.515 Compiler for C supports arguments -maes: YES 00:04:15.515 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:15.515 Compiler for C supports arguments -mavx512bw: YES 00:04:15.515 Compiler for C supports arguments -mavx512dq: YES 00:04:15.515 Compiler for C supports arguments -mavx512vl: YES 00:04:15.515 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:15.515 Compiler for C supports arguments -mavx2: YES 00:04:15.515 Compiler for C supports arguments -mavx: YES 00:04:15.515 Message: lib/net: Defining dependency "net" 00:04:15.515 Message: lib/meter: Defining 
dependency "meter" 00:04:15.515 Message: lib/ethdev: Defining dependency "ethdev" 00:04:15.515 Message: lib/pci: Defining dependency "pci" 00:04:15.515 Message: lib/cmdline: Defining dependency "cmdline" 00:04:15.515 Message: lib/hash: Defining dependency "hash" 00:04:15.515 Message: lib/timer: Defining dependency "timer" 00:04:15.515 Message: lib/compressdev: Defining dependency "compressdev" 00:04:15.515 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:15.515 Message: lib/dmadev: Defining dependency "dmadev" 00:04:15.515 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:15.515 Message: lib/power: Defining dependency "power" 00:04:15.515 Message: lib/reorder: Defining dependency "reorder" 00:04:15.515 Message: lib/security: Defining dependency "security" 00:04:15.515 Has header "linux/userfaultfd.h" : YES 00:04:15.515 Has header "linux/vduse.h" : YES 00:04:15.515 Message: lib/vhost: Defining dependency "vhost" 00:04:15.515 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:15.515 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:15.515 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:15.515 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:15.515 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:15.515 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:15.515 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:15.515 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:15.515 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:15.515 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:15.515 Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:15.515 Configuring doxy-api-html.conf using configuration 00:04:15.515 Configuring doxy-api-man.conf using configuration 00:04:15.515 Program mandb found: YES 
(/usr/bin/mandb) 00:04:15.515 Program sphinx-build found: NO 00:04:15.515 Configuring rte_build_config.h using configuration 00:04:15.515 Message: 00:04:15.515 ================= 00:04:15.515 Applications Enabled 00:04:15.515 ================= 00:04:15.515 00:04:15.515 apps: 00:04:15.515 00:04:15.515 00:04:15.515 Message: 00:04:15.515 ================= 00:04:15.515 Libraries Enabled 00:04:15.515 ================= 00:04:15.515 00:04:15.515 libs: 00:04:15.515 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:15.515 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:15.515 cryptodev, dmadev, power, reorder, security, vhost, 00:04:15.515 00:04:15.515 Message: 00:04:15.515 =============== 00:04:15.515 Drivers Enabled 00:04:15.515 =============== 00:04:15.515 00:04:15.515 common: 00:04:15.515 00:04:15.515 bus: 00:04:15.515 pci, vdev, 00:04:15.515 mempool: 00:04:15.515 ring, 00:04:15.515 dma: 00:04:15.515 00:04:15.515 net: 00:04:15.515 00:04:15.515 crypto: 00:04:15.515 00:04:15.515 compress: 00:04:15.515 00:04:15.515 vdpa: 00:04:15.515 00:04:15.515 00:04:15.515 Message: 00:04:15.515 ================= 00:04:15.515 Content Skipped 00:04:15.515 ================= 00:04:15.515 00:04:15.515 apps: 00:04:15.515 dumpcap: explicitly disabled via build config 00:04:15.515 graph: explicitly disabled via build config 00:04:15.515 pdump: explicitly disabled via build config 00:04:15.515 proc-info: explicitly disabled via build config 00:04:15.515 test-acl: explicitly disabled via build config 00:04:15.515 test-bbdev: explicitly disabled via build config 00:04:15.515 test-cmdline: explicitly disabled via build config 00:04:15.515 test-compress-perf: explicitly disabled via build config 00:04:15.515 test-crypto-perf: explicitly disabled via build config 00:04:15.515 test-dma-perf: explicitly disabled via build config 00:04:15.515 test-eventdev: explicitly disabled via build config 00:04:15.515 test-fib: explicitly disabled via build config 00:04:15.515 
test-flow-perf: explicitly disabled via build config 00:04:15.515 test-gpudev: explicitly disabled via build config 00:04:15.515 test-mldev: explicitly disabled via build config 00:04:15.515 test-pipeline: explicitly disabled via build config 00:04:15.515 test-pmd: explicitly disabled via build config 00:04:15.515 test-regex: explicitly disabled via build config 00:04:15.515 test-sad: explicitly disabled via build config 00:04:15.515 test-security-perf: explicitly disabled via build config 00:04:15.515 00:04:15.515 libs: 00:04:15.515 argparse: explicitly disabled via build config 00:04:15.515 metrics: explicitly disabled via build config 00:04:15.515 acl: explicitly disabled via build config 00:04:15.515 bbdev: explicitly disabled via build config 00:04:15.515 bitratestats: explicitly disabled via build config 00:04:15.515 bpf: explicitly disabled via build config 00:04:15.515 cfgfile: explicitly disabled via build config 00:04:15.515 distributor: explicitly disabled via build config 00:04:15.515 efd: explicitly disabled via build config 00:04:15.515 eventdev: explicitly disabled via build config 00:04:15.515 dispatcher: explicitly disabled via build config 00:04:15.515 gpudev: explicitly disabled via build config 00:04:15.515 gro: explicitly disabled via build config 00:04:15.515 gso: explicitly disabled via build config 00:04:15.515 ip_frag: explicitly disabled via build config 00:04:15.515 jobstats: explicitly disabled via build config 00:04:15.515 latencystats: explicitly disabled via build config 00:04:15.515 lpm: explicitly disabled via build config 00:04:15.515 member: explicitly disabled via build config 00:04:15.515 pcapng: explicitly disabled via build config 00:04:15.515 rawdev: explicitly disabled via build config 00:04:15.515 regexdev: explicitly disabled via build config 00:04:15.515 mldev: explicitly disabled via build config 00:04:15.515 rib: explicitly disabled via build config 00:04:15.515 sched: explicitly disabled via build config 00:04:15.515 
stack: explicitly disabled via build config 00:04:15.515 ipsec: explicitly disabled via build config 00:04:15.515 pdcp: explicitly disabled via build config 00:04:15.515 fib: explicitly disabled via build config 00:04:15.515 port: explicitly disabled via build config 00:04:15.515 pdump: explicitly disabled via build config 00:04:15.515 table: explicitly disabled via build config 00:04:15.515 pipeline: explicitly disabled via build config 00:04:15.515 graph: explicitly disabled via build config 00:04:15.515 node: explicitly disabled via build config 00:04:15.515 00:04:15.515 drivers: 00:04:15.516 common/cpt: not in enabled drivers build config 00:04:15.516 common/dpaax: not in enabled drivers build config 00:04:15.516 common/iavf: not in enabled drivers build config 00:04:15.516 common/idpf: not in enabled drivers build config 00:04:15.516 common/ionic: not in enabled drivers build config 00:04:15.516 common/mvep: not in enabled drivers build config 00:04:15.516 common/octeontx: not in enabled drivers build config 00:04:15.516 bus/auxiliary: not in enabled drivers build config 00:04:15.516 bus/cdx: not in enabled drivers build config 00:04:15.516 bus/dpaa: not in enabled drivers build config 00:04:15.516 bus/fslmc: not in enabled drivers build config 00:04:15.516 bus/ifpga: not in enabled drivers build config 00:04:15.516 bus/platform: not in enabled drivers build config 00:04:15.516 bus/uacce: not in enabled drivers build config 00:04:15.516 bus/vmbus: not in enabled drivers build config 00:04:15.516 common/cnxk: not in enabled drivers build config 00:04:15.516 common/mlx5: not in enabled drivers build config 00:04:15.516 common/nfp: not in enabled drivers build config 00:04:15.516 common/nitrox: not in enabled drivers build config 00:04:15.516 common/qat: not in enabled drivers build config 00:04:15.516 common/sfc_efx: not in enabled drivers build config 00:04:15.516 mempool/bucket: not in enabled drivers build config 00:04:15.516 mempool/cnxk: not in enabled 
drivers build config 00:04:15.516 mempool/dpaa: not in enabled drivers build config 00:04:15.516 mempool/dpaa2: not in enabled drivers build config 00:04:15.516 mempool/octeontx: not in enabled drivers build config 00:04:15.516 mempool/stack: not in enabled drivers build config 00:04:15.516 dma/cnxk: not in enabled drivers build config 00:04:15.516 dma/dpaa: not in enabled drivers build config 00:04:15.516 dma/dpaa2: not in enabled drivers build config 00:04:15.516 dma/hisilicon: not in enabled drivers build config 00:04:15.516 dma/idxd: not in enabled drivers build config 00:04:15.516 dma/ioat: not in enabled drivers build config 00:04:15.516 dma/skeleton: not in enabled drivers build config 00:04:15.516 net/af_packet: not in enabled drivers build config 00:04:15.516 net/af_xdp: not in enabled drivers build config 00:04:15.516 net/ark: not in enabled drivers build config 00:04:15.516 net/atlantic: not in enabled drivers build config 00:04:15.516 net/avp: not in enabled drivers build config 00:04:15.516 net/axgbe: not in enabled drivers build config 00:04:15.516 net/bnx2x: not in enabled drivers build config 00:04:15.516 net/bnxt: not in enabled drivers build config 00:04:15.516 net/bonding: not in enabled drivers build config 00:04:15.516 net/cnxk: not in enabled drivers build config 00:04:15.516 net/cpfl: not in enabled drivers build config 00:04:15.516 net/cxgbe: not in enabled drivers build config 00:04:15.516 net/dpaa: not in enabled drivers build config 00:04:15.516 net/dpaa2: not in enabled drivers build config 00:04:15.516 net/e1000: not in enabled drivers build config 00:04:15.516 net/ena: not in enabled drivers build config 00:04:15.516 net/enetc: not in enabled drivers build config 00:04:15.516 net/enetfec: not in enabled drivers build config 00:04:15.516 net/enic: not in enabled drivers build config 00:04:15.516 net/failsafe: not in enabled drivers build config 00:04:15.516 net/fm10k: not in enabled drivers build config 00:04:15.516 net/gve: not in 
enabled drivers build config 00:04:15.516 net/hinic: not in enabled drivers build config 00:04:15.516 net/hns3: not in enabled drivers build config 00:04:15.516 net/i40e: not in enabled drivers build config 00:04:15.516 net/iavf: not in enabled drivers build config 00:04:15.516 net/ice: not in enabled drivers build config 00:04:15.516 net/idpf: not in enabled drivers build config 00:04:15.516 net/igc: not in enabled drivers build config 00:04:15.516 net/ionic: not in enabled drivers build config 00:04:15.516 net/ipn3ke: not in enabled drivers build config 00:04:15.516 net/ixgbe: not in enabled drivers build config 00:04:15.516 net/mana: not in enabled drivers build config 00:04:15.516 net/memif: not in enabled drivers build config 00:04:15.516 net/mlx4: not in enabled drivers build config 00:04:15.516 net/mlx5: not in enabled drivers build config 00:04:15.516 net/mvneta: not in enabled drivers build config 00:04:15.516 net/mvpp2: not in enabled drivers build config 00:04:15.516 net/netvsc: not in enabled drivers build config 00:04:15.516 net/nfb: not in enabled drivers build config 00:04:15.516 net/nfp: not in enabled drivers build config 00:04:15.516 net/ngbe: not in enabled drivers build config 00:04:15.516 net/null: not in enabled drivers build config 00:04:15.516 net/octeontx: not in enabled drivers build config 00:04:15.516 net/octeon_ep: not in enabled drivers build config 00:04:15.516 net/pcap: not in enabled drivers build config 00:04:15.516 net/pfe: not in enabled drivers build config 00:04:15.516 net/qede: not in enabled drivers build config 00:04:15.516 net/ring: not in enabled drivers build config 00:04:15.516 net/sfc: not in enabled drivers build config 00:04:15.516 net/softnic: not in enabled drivers build config 00:04:15.516 net/tap: not in enabled drivers build config 00:04:15.516 net/thunderx: not in enabled drivers build config 00:04:15.516 net/txgbe: not in enabled drivers build config 00:04:15.516 net/vdev_netvsc: not in enabled drivers build 
config 00:04:15.516 net/vhost: not in enabled drivers build config 00:04:15.516 net/virtio: not in enabled drivers build config 00:04:15.516 net/vmxnet3: not in enabled drivers build config 00:04:15.516 raw/*: missing internal dependency, "rawdev" 00:04:15.516 crypto/armv8: not in enabled drivers build config 00:04:15.516 crypto/bcmfs: not in enabled drivers build config 00:04:15.516 crypto/caam_jr: not in enabled drivers build config 00:04:15.516 crypto/ccp: not in enabled drivers build config 00:04:15.516 crypto/cnxk: not in enabled drivers build config 00:04:15.516 crypto/dpaa_sec: not in enabled drivers build config 00:04:15.516 crypto/dpaa2_sec: not in enabled drivers build config 00:04:15.516 crypto/ipsec_mb: not in enabled drivers build config 00:04:15.516 crypto/mlx5: not in enabled drivers build config 00:04:15.516 crypto/mvsam: not in enabled drivers build config 00:04:15.516 crypto/nitrox: not in enabled drivers build config 00:04:15.516 crypto/null: not in enabled drivers build config 00:04:15.516 crypto/octeontx: not in enabled drivers build config 00:04:15.516 crypto/openssl: not in enabled drivers build config 00:04:15.516 crypto/scheduler: not in enabled drivers build config 00:04:15.516 crypto/uadk: not in enabled drivers build config 00:04:15.516 crypto/virtio: not in enabled drivers build config 00:04:15.516 compress/isal: not in enabled drivers build config 00:04:15.516 compress/mlx5: not in enabled drivers build config 00:04:15.516 compress/nitrox: not in enabled drivers build config 00:04:15.516 compress/octeontx: not in enabled drivers build config 00:04:15.516 compress/zlib: not in enabled drivers build config 00:04:15.516 regex/*: missing internal dependency, "regexdev" 00:04:15.516 ml/*: missing internal dependency, "mldev" 00:04:15.516 vdpa/ifc: not in enabled drivers build config 00:04:15.516 vdpa/mlx5: not in enabled drivers build config 00:04:15.516 vdpa/nfp: not in enabled drivers build config 00:04:15.516 vdpa/sfc: not in enabled 
drivers build config 00:04:15.516 event/*: missing internal dependency, "eventdev" 00:04:15.516 baseband/*: missing internal dependency, "bbdev" 00:04:15.516 gpu/*: missing internal dependency, "gpudev" 00:04:15.516 00:04:15.516 00:04:15.516 Build targets in project: 85 00:04:15.516 00:04:15.516 DPDK 24.03.0 00:04:15.516 00:04:15.516 User defined options 00:04:15.516 buildtype : debug 00:04:15.516 default_library : shared 00:04:15.516 libdir : lib 00:04:15.516 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:15.516 b_sanitize : address 00:04:15.516 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:15.516 c_link_args : 00:04:15.516 cpu_instruction_set: native 00:04:15.516 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:04:15.516 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:04:15.516 enable_docs : false 00:04:15.516 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:04:15.516 enable_kmods : false 00:04:15.516 max_lcores : 128 00:04:15.516 tests : false 00:04:15.516 00:04:15.516 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:15.775 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:04:15.775 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:15.775 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:15.775 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:15.775 [4/268] Linking static target lib/librte_log.a 00:04:15.775 [5/268] Linking static target lib/librte_kvargs.a 00:04:15.775 
[6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:16.342 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:16.601 [8/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.601 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:16.601 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:16.601 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:16.601 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:16.861 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:16.861 [14/268] Linking static target lib/librte_telemetry.a 00:04:16.861 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:16.861 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:16.861 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:16.861 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:16.861 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.120 [20/268] Linking target lib/librte_log.so.24.1 00:04:17.378 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:17.378 [22/268] Linking target lib/librte_kvargs.so.24.1 00:04:17.378 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:17.636 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:17.636 [25/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:17.636 [26/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.636 [27/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:17.895 [28/268] Linking target lib/librte_telemetry.so.24.1 00:04:17.895 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:17.895 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:17.895 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:17.895 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:18.154 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:18.154 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:18.154 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:18.154 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:18.412 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:18.671 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:18.671 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:18.671 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:18.671 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:18.671 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:18.929 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:18.929 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:18.929 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:19.188 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:19.188 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:19.446 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:19.446 [49/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:19.446 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:19.705 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:19.705 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:19.963 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:19.963 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:19.963 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:20.220 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:20.220 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:20.478 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:20.478 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:20.478 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:20.478 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:20.478 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:20.791 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:20.792 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:21.050 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:21.050 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:21.050 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:21.309 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:21.309 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:21.569 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:21.569 [71/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:21.569 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:21.827 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:21.827 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:21.827 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:21.827 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:21.827 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:22.092 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:22.368 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:22.368 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:22.368 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:22.368 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:22.627 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:22.627 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:22.627 [85/268] Linking static target lib/librte_eal.a 00:04:22.885 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:22.885 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:22.885 [88/268] Linking static target lib/librte_ring.a 00:04:23.142 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:23.142 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:23.142 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:23.400 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:23.400 [93/268] Linking static target lib/librte_mempool.a 00:04:23.661 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 
00:04:23.661 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:23.661 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:23.661 [97/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:23.661 [98/268] Linking static target lib/librte_rcu.a 00:04:23.919 [99/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:24.178 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:24.178 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:24.178 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:24.435 [103/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.693 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:24.693 [105/268] Linking static target lib/librte_mbuf.a 00:04:24.693 [106/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:24.693 [107/268] Linking static target lib/librte_meter.a 00:04:24.693 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:24.693 [109/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:24.693 [110/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:24.693 [111/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:24.951 [112/268] Linking static target lib/librte_net.a 00:04:24.951 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:24.951 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:25.209 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:25.209 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:25.467 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:25.467 [118/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:25.726 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:25.726 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:25.984 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:26.243 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:26.502 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:26.762 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:26.762 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:26.762 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:26.762 [127/268] Linking static target lib/librte_pci.a 00:04:27.025 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:27.025 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:27.025 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:27.025 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:27.025 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:27.283 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:27.283 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:27.283 [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:27.283 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:27.283 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:27.283 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:27.283 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:27.283 [140/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:27.283 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:27.542 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:27.542 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:27.542 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:27.542 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:27.803 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:28.064 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:28.064 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:28.064 [149/268] Linking static target lib/librte_cmdline.a 00:04:28.064 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:28.341 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:28.341 [152/268] Linking static target lib/librte_timer.a 00:04:28.612 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:28.871 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:28.871 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:28.871 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:28.871 [157/268] Linking static target lib/librte_ethdev.a 00:04:29.130 [158/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:29.389 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:29.649 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:29.649 [161/268] Linking static target lib/librte_compressdev.a 00:04:29.649 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:29.649 [163/268] 
Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:29.649 [164/268] Linking static target lib/librte_hash.a 00:04:29.649 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:29.908 [166/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:29.908 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:30.167 [168/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:30.425 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:30.425 [170/268] Linking static target lib/librte_dmadev.a 00:04:30.425 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:30.425 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:30.425 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:30.682 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:31.249 [175/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:31.249 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:31.249 [177/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:31.249 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:31.509 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:31.509 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:31.768 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:31.768 [182/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:31.768 [183/268] Linking static target lib/librte_cryptodev.a 00:04:32.027 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 
00:04:32.027 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:32.285 [186/268] Linking static target lib/librte_power.a 00:04:32.544 [187/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:32.544 [188/268] Linking static target lib/librte_security.a 00:04:32.544 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:32.802 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:32.802 [191/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:32.802 [192/268] Linking static target lib/librte_reorder.a 00:04:32.802 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:33.369 [194/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:33.369 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:33.629 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:33.629 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:33.629 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:34.196 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:34.196 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:34.455 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:34.455 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:34.714 [203/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:34.714 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:34.973 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:34.973 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 
00:04:35.233 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:35.491 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:35.492 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:35.750 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:35.750 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:35.750 [212/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:35.750 [213/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:36.010 [214/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:36.010 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:36.010 [216/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:36.010 [217/268] Linking static target drivers/librte_bus_vdev.a 00:04:36.010 [218/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:36.010 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:36.010 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:36.010 [221/268] Linking static target drivers/librte_bus_pci.a 00:04:36.010 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:36.010 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:36.010 [224/268] Linking static target drivers/librte_mempool_ring.a 00:04:36.010 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:36.269 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:36.528 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to 
capture output) 00:04:36.787 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:37.045 [229/268] Linking target lib/librte_eal.so.24.1 00:04:37.304 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:37.304 [231/268] Linking target lib/librte_meter.so.24.1 00:04:37.304 [232/268] Linking target lib/librte_pci.so.24.1 00:04:37.304 [233/268] Linking target lib/librte_ring.so.24.1 00:04:37.304 [234/268] Linking target lib/librte_timer.so.24.1 00:04:37.304 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:37.304 [236/268] Linking target lib/librte_dmadev.so.24.1 00:04:37.562 [237/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:37.562 [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:37.562 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:37.562 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:37.562 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:37.562 [242/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:37.562 [243/268] Linking target lib/librte_mempool.so.24.1 00:04:37.562 [244/268] Linking target lib/librte_rcu.so.24.1 00:04:37.828 [245/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:37.828 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:37.828 [247/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:37.828 [248/268] Linking target lib/librte_mbuf.so.24.1 00:04:37.828 [249/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:38.088 [250/268] Linking target lib/librte_reorder.so.24.1 00:04:38.088 [251/268] Linking target lib/librte_compressdev.so.24.1 00:04:38.088 [252/268] Linking target 
lib/librte_net.so.24.1 00:04:38.088 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:04:38.088 [254/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:38.348 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:38.348 [256/268] Linking target lib/librte_hash.so.24.1 00:04:38.348 [257/268] Linking target lib/librte_cmdline.so.24.1 00:04:38.348 [258/268] Linking target lib/librte_security.so.24.1 00:04:38.348 [259/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:38.916 [260/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:38.916 [261/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:38.916 [262/268] Linking target lib/librte_ethdev.so.24.1 00:04:38.916 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:39.175 [264/268] Linking target lib/librte_power.so.24.1 00:04:42.492 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:42.492 [266/268] Linking static target lib/librte_vhost.a 00:04:43.869 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:43.869 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:43.869 INFO: autodetecting backend as ninja 00:04:43.869 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:05:05.799 CC lib/ut_mock/mock.o 00:05:05.799 CC lib/ut/ut.o 00:05:05.799 CC lib/log/log.o 00:05:05.799 CC lib/log/log_flags.o 00:05:05.799 CC lib/log/log_deprecated.o 00:05:05.799 LIB libspdk_ut_mock.a 00:05:05.799 LIB libspdk_ut.a 00:05:05.799 LIB libspdk_log.a 00:05:05.799 SO libspdk_ut_mock.so.6.0 00:05:05.799 SO libspdk_ut.so.2.0 00:05:05.799 SO libspdk_log.so.7.1 00:05:05.799 SYMLINK libspdk_ut_mock.so 00:05:05.799 SYMLINK libspdk_ut.so 00:05:05.799 SYMLINK 
libspdk_log.so 00:05:05.799 CC lib/dma/dma.o 00:05:05.799 CXX lib/trace_parser/trace.o 00:05:05.799 CC lib/ioat/ioat.o 00:05:05.799 CC lib/util/base64.o 00:05:05.799 CC lib/util/bit_array.o 00:05:05.799 CC lib/util/cpuset.o 00:05:05.799 CC lib/util/crc32.o 00:05:05.799 CC lib/util/crc16.o 00:05:05.799 CC lib/util/crc32c.o 00:05:05.799 CC lib/vfio_user/host/vfio_user_pci.o 00:05:05.799 CC lib/util/crc32_ieee.o 00:05:05.799 CC lib/util/crc64.o 00:05:05.799 CC lib/util/dif.o 00:05:05.799 CC lib/vfio_user/host/vfio_user.o 00:05:05.799 LIB libspdk_dma.a 00:05:05.799 CC lib/util/fd.o 00:05:05.799 CC lib/util/fd_group.o 00:05:05.799 SO libspdk_dma.so.5.0 00:05:06.057 CC lib/util/file.o 00:05:06.057 CC lib/util/hexlify.o 00:05:06.057 SYMLINK libspdk_dma.so 00:05:06.057 CC lib/util/iov.o 00:05:06.057 LIB libspdk_ioat.a 00:05:06.057 CC lib/util/math.o 00:05:06.057 SO libspdk_ioat.so.7.0 00:05:06.057 CC lib/util/net.o 00:05:06.057 LIB libspdk_vfio_user.a 00:05:06.057 CC lib/util/pipe.o 00:05:06.057 SO libspdk_vfio_user.so.5.0 00:05:06.057 CC lib/util/strerror_tls.o 00:05:06.057 SYMLINK libspdk_ioat.so 00:05:06.057 CC lib/util/string.o 00:05:06.317 SYMLINK libspdk_vfio_user.so 00:05:06.317 CC lib/util/uuid.o 00:05:06.317 CC lib/util/xor.o 00:05:06.317 CC lib/util/zipf.o 00:05:06.317 CC lib/util/md5.o 00:05:06.575 LIB libspdk_util.a 00:05:06.575 SO libspdk_util.so.10.1 00:05:06.878 LIB libspdk_trace_parser.a 00:05:06.878 SYMLINK libspdk_util.so 00:05:06.878 SO libspdk_trace_parser.so.6.0 00:05:06.878 SYMLINK libspdk_trace_parser.so 00:05:07.138 CC lib/idxd/idxd.o 00:05:07.138 CC lib/idxd/idxd_user.o 00:05:07.138 CC lib/json/json_parse.o 00:05:07.138 CC lib/rdma_utils/rdma_utils.o 00:05:07.138 CC lib/json/json_util.o 00:05:07.138 CC lib/idxd/idxd_kernel.o 00:05:07.138 CC lib/json/json_write.o 00:05:07.138 CC lib/conf/conf.o 00:05:07.138 CC lib/env_dpdk/env.o 00:05:07.138 CC lib/vmd/vmd.o 00:05:07.138 CC lib/vmd/led.o 00:05:07.396 LIB libspdk_conf.a 00:05:07.396 CC 
lib/env_dpdk/memory.o 00:05:07.396 CC lib/env_dpdk/pci.o 00:05:07.396 CC lib/env_dpdk/init.o 00:05:07.396 SO libspdk_conf.so.6.0 00:05:07.396 LIB libspdk_json.a 00:05:07.396 SYMLINK libspdk_conf.so 00:05:07.396 CC lib/env_dpdk/threads.o 00:05:07.396 CC lib/env_dpdk/pci_ioat.o 00:05:07.396 SO libspdk_json.so.6.0 00:05:07.396 LIB libspdk_rdma_utils.a 00:05:07.396 SO libspdk_rdma_utils.so.1.0 00:05:07.396 SYMLINK libspdk_json.so 00:05:07.654 SYMLINK libspdk_rdma_utils.so 00:05:07.654 CC lib/env_dpdk/pci_virtio.o 00:05:07.654 CC lib/env_dpdk/pci_vmd.o 00:05:07.654 CC lib/jsonrpc/jsonrpc_server.o 00:05:07.654 CC lib/env_dpdk/pci_idxd.o 00:05:07.654 CC lib/rdma_provider/common.o 00:05:07.654 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:07.654 CC lib/env_dpdk/pci_event.o 00:05:07.913 CC lib/env_dpdk/sigbus_handler.o 00:05:07.913 CC lib/env_dpdk/pci_dpdk.o 00:05:07.913 LIB libspdk_idxd.a 00:05:07.913 SO libspdk_idxd.so.12.1 00:05:07.913 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:07.913 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:07.913 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:07.913 LIB libspdk_rdma_provider.a 00:05:07.913 SYMLINK libspdk_idxd.so 00:05:07.913 CC lib/jsonrpc/jsonrpc_client.o 00:05:07.913 LIB libspdk_vmd.a 00:05:07.913 SO libspdk_rdma_provider.so.7.0 00:05:07.913 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:07.914 SO libspdk_vmd.so.6.0 00:05:08.173 SYMLINK libspdk_rdma_provider.so 00:05:08.173 SYMLINK libspdk_vmd.so 00:05:08.173 LIB libspdk_jsonrpc.a 00:05:08.431 SO libspdk_jsonrpc.so.6.0 00:05:08.431 SYMLINK libspdk_jsonrpc.so 00:05:08.690 CC lib/rpc/rpc.o 00:05:08.949 LIB libspdk_rpc.a 00:05:08.949 SO libspdk_rpc.so.6.0 00:05:08.949 LIB libspdk_env_dpdk.a 00:05:08.949 SYMLINK libspdk_rpc.so 00:05:08.949 SO libspdk_env_dpdk.so.15.1 00:05:09.208 CC lib/notify/notify.o 00:05:09.208 SYMLINK libspdk_env_dpdk.so 00:05:09.208 CC lib/notify/notify_rpc.o 00:05:09.208 CC lib/trace/trace.o 00:05:09.208 CC lib/keyring/keyring.o 00:05:09.208 CC lib/keyring/keyring_rpc.o 
00:05:09.208 CC lib/trace/trace_flags.o 00:05:09.208 CC lib/trace/trace_rpc.o 00:05:09.469 LIB libspdk_notify.a 00:05:09.469 SO libspdk_notify.so.6.0 00:05:09.469 LIB libspdk_keyring.a 00:05:09.469 SYMLINK libspdk_notify.so 00:05:09.469 SO libspdk_keyring.so.2.0 00:05:09.469 LIB libspdk_trace.a 00:05:09.469 SO libspdk_trace.so.11.0 00:05:09.728 SYMLINK libspdk_keyring.so 00:05:09.728 SYMLINK libspdk_trace.so 00:05:09.986 CC lib/thread/thread.o 00:05:09.986 CC lib/thread/iobuf.o 00:05:09.986 CC lib/sock/sock.o 00:05:09.986 CC lib/sock/sock_rpc.o 00:05:10.553 LIB libspdk_sock.a 00:05:10.553 SO libspdk_sock.so.10.0 00:05:10.553 SYMLINK libspdk_sock.so 00:05:10.812 CC lib/nvme/nvme_ctrlr.o 00:05:10.812 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:10.812 CC lib/nvme/nvme_fabric.o 00:05:10.812 CC lib/nvme/nvme_ns_cmd.o 00:05:10.812 CC lib/nvme/nvme_ns.o 00:05:10.812 CC lib/nvme/nvme_pcie_common.o 00:05:10.812 CC lib/nvme/nvme_pcie.o 00:05:10.812 CC lib/nvme/nvme.o 00:05:10.812 CC lib/nvme/nvme_qpair.o 00:05:11.748 CC lib/nvme/nvme_quirks.o 00:05:11.748 CC lib/nvme/nvme_transport.o 00:05:11.748 CC lib/nvme/nvme_discovery.o 00:05:12.008 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:12.008 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:12.008 CC lib/nvme/nvme_tcp.o 00:05:12.008 CC lib/nvme/nvme_opal.o 00:05:12.008 LIB libspdk_thread.a 00:05:12.008 SO libspdk_thread.so.11.0 00:05:12.266 SYMLINK libspdk_thread.so 00:05:12.266 CC lib/nvme/nvme_io_msg.o 00:05:12.266 CC lib/nvme/nvme_poll_group.o 00:05:12.266 CC lib/nvme/nvme_zns.o 00:05:12.525 CC lib/nvme/nvme_stubs.o 00:05:12.525 CC lib/nvme/nvme_auth.o 00:05:12.784 CC lib/nvme/nvme_cuse.o 00:05:12.784 CC lib/nvme/nvme_rdma.o 00:05:12.784 CC lib/accel/accel.o 00:05:13.043 CC lib/accel/accel_rpc.o 00:05:13.043 CC lib/accel/accel_sw.o 00:05:13.301 CC lib/blob/blobstore.o 00:05:13.301 CC lib/init/json_config.o 00:05:13.560 CC lib/init/subsystem.o 00:05:13.560 CC lib/init/subsystem_rpc.o 00:05:13.560 CC lib/virtio/virtio.o 00:05:13.560 CC 
lib/virtio/virtio_vhost_user.o 00:05:13.819 CC lib/init/rpc.o 00:05:13.819 CC lib/virtio/virtio_vfio_user.o 00:05:13.819 CC lib/virtio/virtio_pci.o 00:05:13.819 CC lib/fsdev/fsdev.o 00:05:13.819 LIB libspdk_init.a 00:05:13.819 SO libspdk_init.so.6.0 00:05:14.077 CC lib/fsdev/fsdev_io.o 00:05:14.077 CC lib/blob/request.o 00:05:14.077 SYMLINK libspdk_init.so 00:05:14.077 CC lib/fsdev/fsdev_rpc.o 00:05:14.077 CC lib/blob/zeroes.o 00:05:14.077 CC lib/blob/blob_bs_dev.o 00:05:14.077 LIB libspdk_virtio.a 00:05:14.342 CC lib/event/app.o 00:05:14.342 LIB libspdk_accel.a 00:05:14.342 SO libspdk_virtio.so.7.0 00:05:14.342 SO libspdk_accel.so.16.0 00:05:14.342 CC lib/event/reactor.o 00:05:14.342 SYMLINK libspdk_virtio.so 00:05:14.342 CC lib/event/log_rpc.o 00:05:14.342 SYMLINK libspdk_accel.so 00:05:14.342 CC lib/event/app_rpc.o 00:05:14.342 CC lib/event/scheduler_static.o 00:05:14.601 CC lib/bdev/bdev_rpc.o 00:05:14.601 CC lib/bdev/bdev.o 00:05:14.601 CC lib/bdev/bdev_zone.o 00:05:14.601 CC lib/bdev/part.o 00:05:14.601 LIB libspdk_nvme.a 00:05:14.601 LIB libspdk_fsdev.a 00:05:14.601 CC lib/bdev/scsi_nvme.o 00:05:14.601 SO libspdk_fsdev.so.2.0 00:05:14.860 SYMLINK libspdk_fsdev.so 00:05:14.860 SO libspdk_nvme.so.15.0 00:05:14.860 LIB libspdk_event.a 00:05:14.860 SO libspdk_event.so.14.0 00:05:15.120 SYMLINK libspdk_event.so 00:05:15.120 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:15.120 SYMLINK libspdk_nvme.so 00:05:16.056 LIB libspdk_fuse_dispatcher.a 00:05:16.056 SO libspdk_fuse_dispatcher.so.1.0 00:05:16.056 SYMLINK libspdk_fuse_dispatcher.so 00:05:17.960 LIB libspdk_blob.a 00:05:17.960 SO libspdk_blob.so.11.0 00:05:18.218 SYMLINK libspdk_blob.so 00:05:18.218 LIB libspdk_bdev.a 00:05:18.218 SO libspdk_bdev.so.17.0 00:05:18.218 SYMLINK libspdk_bdev.so 00:05:18.218 CC lib/blobfs/blobfs.o 00:05:18.218 CC lib/blobfs/tree.o 00:05:18.218 CC lib/lvol/lvol.o 00:05:18.477 CC lib/nvmf/ctrlr.o 00:05:18.477 CC lib/nvmf/ctrlr_discovery.o 00:05:18.477 CC lib/nvmf/ctrlr_bdev.o 
00:05:18.477 CC lib/nbd/nbd.o 00:05:18.477 CC lib/ublk/ublk.o 00:05:18.477 CC lib/scsi/dev.o 00:05:18.477 CC lib/ftl/ftl_core.o 00:05:18.477 CC lib/scsi/lun.o 00:05:18.754 CC lib/ftl/ftl_init.o 00:05:19.058 CC lib/scsi/port.o 00:05:19.058 CC lib/nbd/nbd_rpc.o 00:05:19.058 CC lib/ftl/ftl_layout.o 00:05:19.058 CC lib/ftl/ftl_debug.o 00:05:19.058 CC lib/scsi/scsi.o 00:05:19.316 CC lib/scsi/scsi_bdev.o 00:05:19.316 LIB libspdk_nbd.a 00:05:19.316 SO libspdk_nbd.so.7.0 00:05:19.316 CC lib/ublk/ublk_rpc.o 00:05:19.316 SYMLINK libspdk_nbd.so 00:05:19.316 CC lib/nvmf/subsystem.o 00:05:19.316 CC lib/nvmf/nvmf.o 00:05:19.316 CC lib/scsi/scsi_pr.o 00:05:19.316 CC lib/nvmf/nvmf_rpc.o 00:05:19.575 LIB libspdk_blobfs.a 00:05:19.575 CC lib/ftl/ftl_io.o 00:05:19.575 SO libspdk_blobfs.so.10.0 00:05:19.575 LIB libspdk_ublk.a 00:05:19.575 SO libspdk_ublk.so.3.0 00:05:19.575 SYMLINK libspdk_blobfs.so 00:05:19.575 CC lib/ftl/ftl_sb.o 00:05:19.575 LIB libspdk_lvol.a 00:05:19.575 SYMLINK libspdk_ublk.so 00:05:19.575 CC lib/ftl/ftl_l2p.o 00:05:19.575 SO libspdk_lvol.so.10.0 00:05:19.834 SYMLINK libspdk_lvol.so 00:05:19.834 CC lib/ftl/ftl_l2p_flat.o 00:05:19.834 CC lib/ftl/ftl_nv_cache.o 00:05:19.834 CC lib/nvmf/transport.o 00:05:19.834 CC lib/scsi/scsi_rpc.o 00:05:19.834 CC lib/scsi/task.o 00:05:19.834 CC lib/ftl/ftl_band.o 00:05:20.092 CC lib/nvmf/tcp.o 00:05:20.092 CC lib/ftl/ftl_band_ops.o 00:05:20.092 LIB libspdk_scsi.a 00:05:20.092 SO libspdk_scsi.so.9.0 00:05:20.351 SYMLINK libspdk_scsi.so 00:05:20.351 CC lib/nvmf/stubs.o 00:05:20.351 CC lib/nvmf/mdns_server.o 00:05:20.610 CC lib/nvmf/rdma.o 00:05:20.610 CC lib/nvmf/auth.o 00:05:20.610 CC lib/iscsi/conn.o 00:05:20.610 CC lib/iscsi/init_grp.o 00:05:20.868 CC lib/iscsi/iscsi.o 00:05:20.868 CC lib/ftl/ftl_writer.o 00:05:21.126 CC lib/iscsi/param.o 00:05:21.126 CC lib/iscsi/portal_grp.o 00:05:21.126 CC lib/iscsi/tgt_node.o 00:05:21.126 CC lib/ftl/ftl_rq.o 00:05:21.385 CC lib/iscsi/iscsi_subsystem.o 00:05:21.385 CC lib/iscsi/iscsi_rpc.o 
00:05:21.385 CC lib/ftl/ftl_reloc.o 00:05:21.385 CC lib/iscsi/task.o 00:05:21.643 CC lib/ftl/ftl_l2p_cache.o 00:05:21.643 CC lib/vhost/vhost.o 00:05:21.643 CC lib/ftl/ftl_p2l.o 00:05:21.643 CC lib/vhost/vhost_rpc.o 00:05:21.901 CC lib/ftl/ftl_p2l_log.o 00:05:21.901 CC lib/vhost/vhost_scsi.o 00:05:21.901 CC lib/vhost/vhost_blk.o 00:05:22.159 CC lib/ftl/mngt/ftl_mngt.o 00:05:22.159 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:22.159 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:22.418 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:22.418 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:22.418 CC lib/vhost/rte_vhost_user.o 00:05:22.418 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:22.418 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:22.676 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:22.676 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:22.676 LIB libspdk_iscsi.a 00:05:22.676 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:22.676 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:22.676 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:22.934 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:22.934 SO libspdk_iscsi.so.8.0 00:05:22.934 CC lib/ftl/utils/ftl_conf.o 00:05:22.934 CC lib/ftl/utils/ftl_md.o 00:05:22.934 CC lib/ftl/utils/ftl_mempool.o 00:05:22.934 SYMLINK libspdk_iscsi.so 00:05:22.934 CC lib/ftl/utils/ftl_bitmap.o 00:05:22.934 CC lib/ftl/utils/ftl_property.o 00:05:22.934 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:23.199 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:23.199 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:23.199 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:23.199 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:23.199 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:23.199 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:23.199 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:23.457 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:23.457 LIB libspdk_nvmf.a 00:05:23.457 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:23.457 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:23.457 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:23.457 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:23.457 CC 
lib/ftl/base/ftl_base_dev.o 00:05:23.457 CC lib/ftl/base/ftl_base_bdev.o 00:05:23.457 SO libspdk_nvmf.so.20.0 00:05:23.715 CC lib/ftl/ftl_trace.o 00:05:23.715 LIB libspdk_vhost.a 00:05:23.715 SO libspdk_vhost.so.8.0 00:05:23.715 SYMLINK libspdk_nvmf.so 00:05:23.975 SYMLINK libspdk_vhost.so 00:05:23.975 LIB libspdk_ftl.a 00:05:24.232 SO libspdk_ftl.so.9.0 00:05:24.491 SYMLINK libspdk_ftl.so 00:05:25.057 CC module/env_dpdk/env_dpdk_rpc.o 00:05:25.057 CC module/keyring/linux/keyring.o 00:05:25.057 CC module/blob/bdev/blob_bdev.o 00:05:25.058 CC module/scheduler/gscheduler/gscheduler.o 00:05:25.058 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:25.058 CC module/fsdev/aio/fsdev_aio.o 00:05:25.058 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:25.058 CC module/sock/posix/posix.o 00:05:25.058 CC module/keyring/file/keyring.o 00:05:25.058 CC module/accel/error/accel_error.o 00:05:25.058 LIB libspdk_env_dpdk_rpc.a 00:05:25.058 SO libspdk_env_dpdk_rpc.so.6.0 00:05:25.058 SYMLINK libspdk_env_dpdk_rpc.so 00:05:25.058 CC module/accel/error/accel_error_rpc.o 00:05:25.058 CC module/keyring/linux/keyring_rpc.o 00:05:25.058 LIB libspdk_scheduler_dpdk_governor.a 00:05:25.058 CC module/keyring/file/keyring_rpc.o 00:05:25.058 LIB libspdk_scheduler_gscheduler.a 00:05:25.350 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:25.350 SO libspdk_scheduler_gscheduler.so.4.0 00:05:25.350 LIB libspdk_scheduler_dynamic.a 00:05:25.350 SO libspdk_scheduler_dynamic.so.4.0 00:05:25.350 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:25.350 SYMLINK libspdk_scheduler_gscheduler.so 00:05:25.350 LIB libspdk_keyring_linux.a 00:05:25.350 LIB libspdk_accel_error.a 00:05:25.350 SYMLINK libspdk_scheduler_dynamic.so 00:05:25.350 LIB libspdk_keyring_file.a 00:05:25.350 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:25.350 LIB libspdk_blob_bdev.a 00:05:25.350 SO libspdk_accel_error.so.2.0 00:05:25.350 SO libspdk_keyring_linux.so.1.0 00:05:25.350 SO libspdk_blob_bdev.so.11.0 00:05:25.350 SO 
libspdk_keyring_file.so.2.0 00:05:25.350 SYMLINK libspdk_accel_error.so 00:05:25.350 SYMLINK libspdk_keyring_linux.so 00:05:25.350 CC module/fsdev/aio/linux_aio_mgr.o 00:05:25.350 CC module/accel/ioat/accel_ioat.o 00:05:25.350 SYMLINK libspdk_blob_bdev.so 00:05:25.350 CC module/accel/ioat/accel_ioat_rpc.o 00:05:25.350 SYMLINK libspdk_keyring_file.so 00:05:25.609 CC module/accel/dsa/accel_dsa.o 00:05:25.609 CC module/accel/iaa/accel_iaa.o 00:05:25.609 CC module/accel/iaa/accel_iaa_rpc.o 00:05:25.609 CC module/accel/dsa/accel_dsa_rpc.o 00:05:25.609 LIB libspdk_accel_ioat.a 00:05:25.609 SO libspdk_accel_ioat.so.6.0 00:05:25.868 CC module/bdev/delay/vbdev_delay.o 00:05:25.868 CC module/blobfs/bdev/blobfs_bdev.o 00:05:25.868 LIB libspdk_accel_iaa.a 00:05:25.868 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:25.868 SO libspdk_accel_iaa.so.3.0 00:05:25.868 SYMLINK libspdk_accel_ioat.so 00:05:25.868 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:25.868 LIB libspdk_accel_dsa.a 00:05:25.868 CC module/bdev/gpt/gpt.o 00:05:25.868 CC module/bdev/error/vbdev_error.o 00:05:25.868 LIB libspdk_fsdev_aio.a 00:05:25.868 SYMLINK libspdk_accel_iaa.so 00:05:25.868 SO libspdk_accel_dsa.so.5.0 00:05:25.868 SO libspdk_fsdev_aio.so.1.0 00:05:25.868 LIB libspdk_sock_posix.a 00:05:25.868 SYMLINK libspdk_accel_dsa.so 00:05:26.127 SYMLINK libspdk_fsdev_aio.so 00:05:26.127 SO libspdk_sock_posix.so.6.0 00:05:26.127 CC module/bdev/error/vbdev_error_rpc.o 00:05:26.127 LIB libspdk_blobfs_bdev.a 00:05:26.127 SO libspdk_blobfs_bdev.so.6.0 00:05:26.127 SYMLINK libspdk_sock_posix.so 00:05:26.127 CC module/bdev/gpt/vbdev_gpt.o 00:05:26.127 CC module/bdev/lvol/vbdev_lvol.o 00:05:26.127 SYMLINK libspdk_blobfs_bdev.so 00:05:26.127 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:26.127 CC module/bdev/malloc/bdev_malloc.o 00:05:26.128 CC module/bdev/null/bdev_null.o 00:05:26.128 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:26.128 CC module/bdev/nvme/bdev_nvme.o 00:05:26.128 LIB libspdk_bdev_error.a 00:05:26.386 SO 
libspdk_bdev_error.so.6.0 00:05:26.386 CC module/bdev/passthru/vbdev_passthru.o 00:05:26.386 LIB libspdk_bdev_delay.a 00:05:26.386 SO libspdk_bdev_delay.so.6.0 00:05:26.386 SYMLINK libspdk_bdev_error.so 00:05:26.386 CC module/bdev/null/bdev_null_rpc.o 00:05:26.386 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:26.386 SYMLINK libspdk_bdev_delay.so 00:05:26.386 LIB libspdk_bdev_gpt.a 00:05:26.386 SO libspdk_bdev_gpt.so.6.0 00:05:26.645 SYMLINK libspdk_bdev_gpt.so 00:05:26.645 CC module/bdev/nvme/nvme_rpc.o 00:05:26.645 LIB libspdk_bdev_null.a 00:05:26.645 CC module/bdev/nvme/bdev_mdns_client.o 00:05:26.645 CC module/bdev/raid/bdev_raid.o 00:05:26.645 SO libspdk_bdev_null.so.6.0 00:05:26.645 LIB libspdk_bdev_malloc.a 00:05:26.645 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:26.645 SO libspdk_bdev_malloc.so.6.0 00:05:26.645 SYMLINK libspdk_bdev_null.so 00:05:26.645 CC module/bdev/split/vbdev_split.o 00:05:26.645 LIB libspdk_bdev_lvol.a 00:05:26.645 SYMLINK libspdk_bdev_malloc.so 00:05:26.904 SO libspdk_bdev_lvol.so.6.0 00:05:26.904 CC module/bdev/nvme/vbdev_opal.o 00:05:26.904 LIB libspdk_bdev_passthru.a 00:05:26.904 CC module/bdev/raid/bdev_raid_rpc.o 00:05:26.904 SYMLINK libspdk_bdev_lvol.so 00:05:26.904 SO libspdk_bdev_passthru.so.6.0 00:05:26.904 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:26.904 CC module/bdev/aio/bdev_aio.o 00:05:26.904 SYMLINK libspdk_bdev_passthru.so 00:05:26.904 CC module/bdev/aio/bdev_aio_rpc.o 00:05:26.904 CC module/bdev/split/vbdev_split_rpc.o 00:05:27.164 CC module/bdev/ftl/bdev_ftl.o 00:05:27.164 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:27.164 LIB libspdk_bdev_split.a 00:05:27.164 SO libspdk_bdev_split.so.6.0 00:05:27.422 CC module/bdev/raid/bdev_raid_sb.o 00:05:27.422 SYMLINK libspdk_bdev_split.so 00:05:27.422 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:27.422 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:27.422 CC module/bdev/iscsi/bdev_iscsi.o 00:05:27.422 LIB libspdk_bdev_aio.a 00:05:27.423 CC 
module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:27.423 SO libspdk_bdev_aio.so.6.0 00:05:27.423 LIB libspdk_bdev_ftl.a 00:05:27.423 LIB libspdk_bdev_zone_block.a 00:05:27.682 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:27.682 SO libspdk_bdev_ftl.so.6.0 00:05:27.682 SO libspdk_bdev_zone_block.so.6.0 00:05:27.682 SYMLINK libspdk_bdev_aio.so 00:05:27.682 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:27.682 CC module/bdev/raid/raid0.o 00:05:27.682 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:27.682 SYMLINK libspdk_bdev_ftl.so 00:05:27.682 CC module/bdev/raid/raid1.o 00:05:27.682 SYMLINK libspdk_bdev_zone_block.so 00:05:27.682 CC module/bdev/raid/concat.o 00:05:27.682 CC module/bdev/raid/raid5f.o 00:05:27.941 LIB libspdk_bdev_iscsi.a 00:05:27.941 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:27.941 SO libspdk_bdev_iscsi.so.6.0 00:05:27.941 SYMLINK libspdk_bdev_iscsi.so 00:05:28.200 LIB libspdk_bdev_virtio.a 00:05:28.200 SO libspdk_bdev_virtio.so.6.0 00:05:28.458 LIB libspdk_bdev_raid.a 00:05:28.458 SYMLINK libspdk_bdev_virtio.so 00:05:28.458 SO libspdk_bdev_raid.so.6.0 00:05:28.458 SYMLINK libspdk_bdev_raid.so 00:05:29.849 LIB libspdk_bdev_nvme.a 00:05:29.849 SO libspdk_bdev_nvme.so.7.1 00:05:29.849 SYMLINK libspdk_bdev_nvme.so 00:05:30.439 CC module/event/subsystems/iobuf/iobuf.o 00:05:30.439 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:30.439 CC module/event/subsystems/vmd/vmd.o 00:05:30.439 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:30.439 CC module/event/subsystems/fsdev/fsdev.o 00:05:30.439 CC module/event/subsystems/scheduler/scheduler.o 00:05:30.439 CC module/event/subsystems/sock/sock.o 00:05:30.439 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:30.439 CC module/event/subsystems/keyring/keyring.o 00:05:30.705 LIB libspdk_event_vhost_blk.a 00:05:30.705 LIB libspdk_event_fsdev.a 00:05:30.705 LIB libspdk_event_keyring.a 00:05:30.705 LIB libspdk_event_scheduler.a 00:05:30.705 LIB libspdk_event_vmd.a 00:05:30.705 SO libspdk_event_vhost_blk.so.3.0 
00:05:30.705 LIB libspdk_event_sock.a 00:05:30.705 LIB libspdk_event_iobuf.a 00:05:30.705 SO libspdk_event_fsdev.so.1.0 00:05:30.705 SO libspdk_event_keyring.so.1.0 00:05:30.705 SO libspdk_event_scheduler.so.4.0 00:05:30.705 SO libspdk_event_vmd.so.6.0 00:05:30.705 SO libspdk_event_sock.so.5.0 00:05:30.705 SO libspdk_event_iobuf.so.3.0 00:05:30.705 SYMLINK libspdk_event_vhost_blk.so 00:05:30.705 SYMLINK libspdk_event_fsdev.so 00:05:30.705 SYMLINK libspdk_event_keyring.so 00:05:30.705 SYMLINK libspdk_event_scheduler.so 00:05:30.705 SYMLINK libspdk_event_sock.so 00:05:30.705 SYMLINK libspdk_event_vmd.so 00:05:30.705 SYMLINK libspdk_event_iobuf.so 00:05:30.968 CC module/event/subsystems/accel/accel.o 00:05:31.227 LIB libspdk_event_accel.a 00:05:31.227 SO libspdk_event_accel.so.6.0 00:05:31.227 SYMLINK libspdk_event_accel.so 00:05:31.487 CC module/event/subsystems/bdev/bdev.o 00:05:31.746 LIB libspdk_event_bdev.a 00:05:31.746 SO libspdk_event_bdev.so.6.0 00:05:31.746 SYMLINK libspdk_event_bdev.so 00:05:32.005 CC module/event/subsystems/scsi/scsi.o 00:05:32.005 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:32.005 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:32.005 CC module/event/subsystems/nbd/nbd.o 00:05:32.005 CC module/event/subsystems/ublk/ublk.o 00:05:32.264 LIB libspdk_event_nbd.a 00:05:32.264 LIB libspdk_event_ublk.a 00:05:32.264 LIB libspdk_event_scsi.a 00:05:32.264 SO libspdk_event_ublk.so.3.0 00:05:32.264 SO libspdk_event_nbd.so.6.0 00:05:32.264 SO libspdk_event_scsi.so.6.0 00:05:32.264 LIB libspdk_event_nvmf.a 00:05:32.523 SYMLINK libspdk_event_nbd.so 00:05:32.523 SYMLINK libspdk_event_ublk.so 00:05:32.523 SYMLINK libspdk_event_scsi.so 00:05:32.523 SO libspdk_event_nvmf.so.6.0 00:05:32.523 SYMLINK libspdk_event_nvmf.so 00:05:32.523 CC module/event/subsystems/iscsi/iscsi.o 00:05:32.782 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:32.782 LIB libspdk_event_vhost_scsi.a 00:05:32.782 LIB libspdk_event_iscsi.a 00:05:32.782 SO 
libspdk_event_vhost_scsi.so.3.0 00:05:32.782 SO libspdk_event_iscsi.so.6.0 00:05:33.041 SYMLINK libspdk_event_vhost_scsi.so 00:05:33.041 SYMLINK libspdk_event_iscsi.so 00:05:33.041 SO libspdk.so.6.0 00:05:33.041 SYMLINK libspdk.so 00:05:33.300 CC app/trace_record/trace_record.o 00:05:33.300 CXX app/trace/trace.o 00:05:33.300 TEST_HEADER include/spdk/accel.h 00:05:33.300 TEST_HEADER include/spdk/accel_module.h 00:05:33.300 TEST_HEADER include/spdk/assert.h 00:05:33.300 TEST_HEADER include/spdk/barrier.h 00:05:33.300 TEST_HEADER include/spdk/base64.h 00:05:33.300 TEST_HEADER include/spdk/bdev.h 00:05:33.300 TEST_HEADER include/spdk/bdev_module.h 00:05:33.559 TEST_HEADER include/spdk/bdev_zone.h 00:05:33.559 TEST_HEADER include/spdk/bit_array.h 00:05:33.559 TEST_HEADER include/spdk/bit_pool.h 00:05:33.559 TEST_HEADER include/spdk/blob_bdev.h 00:05:33.559 CC app/nvmf_tgt/nvmf_main.o 00:05:33.559 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:33.559 TEST_HEADER include/spdk/blobfs.h 00:05:33.559 TEST_HEADER include/spdk/blob.h 00:05:33.559 TEST_HEADER include/spdk/conf.h 00:05:33.559 TEST_HEADER include/spdk/config.h 00:05:33.559 CC app/iscsi_tgt/iscsi_tgt.o 00:05:33.559 TEST_HEADER include/spdk/cpuset.h 00:05:33.559 TEST_HEADER include/spdk/crc16.h 00:05:33.559 TEST_HEADER include/spdk/crc32.h 00:05:33.559 TEST_HEADER include/spdk/crc64.h 00:05:33.559 TEST_HEADER include/spdk/dif.h 00:05:33.559 TEST_HEADER include/spdk/dma.h 00:05:33.559 TEST_HEADER include/spdk/endian.h 00:05:33.559 TEST_HEADER include/spdk/env_dpdk.h 00:05:33.559 TEST_HEADER include/spdk/env.h 00:05:33.559 TEST_HEADER include/spdk/event.h 00:05:33.559 TEST_HEADER include/spdk/fd_group.h 00:05:33.559 TEST_HEADER include/spdk/fd.h 00:05:33.559 TEST_HEADER include/spdk/file.h 00:05:33.559 TEST_HEADER include/spdk/fsdev.h 00:05:33.559 CC app/spdk_tgt/spdk_tgt.o 00:05:33.559 TEST_HEADER include/spdk/fsdev_module.h 00:05:33.559 TEST_HEADER include/spdk/ftl.h 00:05:33.559 TEST_HEADER 
include/spdk/fuse_dispatcher.h 00:05:33.559 TEST_HEADER include/spdk/gpt_spec.h 00:05:33.559 TEST_HEADER include/spdk/hexlify.h 00:05:33.559 TEST_HEADER include/spdk/histogram_data.h 00:05:33.559 CC examples/util/zipf/zipf.o 00:05:33.559 TEST_HEADER include/spdk/idxd.h 00:05:33.559 TEST_HEADER include/spdk/idxd_spec.h 00:05:33.559 TEST_HEADER include/spdk/init.h 00:05:33.559 TEST_HEADER include/spdk/ioat.h 00:05:33.559 CC test/thread/poller_perf/poller_perf.o 00:05:33.559 TEST_HEADER include/spdk/ioat_spec.h 00:05:33.559 TEST_HEADER include/spdk/iscsi_spec.h 00:05:33.559 TEST_HEADER include/spdk/json.h 00:05:33.559 TEST_HEADER include/spdk/jsonrpc.h 00:05:33.559 TEST_HEADER include/spdk/keyring.h 00:05:33.559 TEST_HEADER include/spdk/keyring_module.h 00:05:33.559 TEST_HEADER include/spdk/likely.h 00:05:33.559 TEST_HEADER include/spdk/log.h 00:05:33.559 TEST_HEADER include/spdk/lvol.h 00:05:33.559 CC test/app/bdev_svc/bdev_svc.o 00:05:33.559 TEST_HEADER include/spdk/md5.h 00:05:33.559 CC test/dma/test_dma/test_dma.o 00:05:33.559 TEST_HEADER include/spdk/memory.h 00:05:33.559 TEST_HEADER include/spdk/mmio.h 00:05:33.559 TEST_HEADER include/spdk/nbd.h 00:05:33.559 TEST_HEADER include/spdk/net.h 00:05:33.559 TEST_HEADER include/spdk/notify.h 00:05:33.559 TEST_HEADER include/spdk/nvme.h 00:05:33.559 TEST_HEADER include/spdk/nvme_intel.h 00:05:33.559 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:33.559 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:33.559 TEST_HEADER include/spdk/nvme_spec.h 00:05:33.559 TEST_HEADER include/spdk/nvme_zns.h 00:05:33.559 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:33.559 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:33.559 TEST_HEADER include/spdk/nvmf.h 00:05:33.559 TEST_HEADER include/spdk/nvmf_spec.h 00:05:33.559 TEST_HEADER include/spdk/nvmf_transport.h 00:05:33.559 TEST_HEADER include/spdk/opal.h 00:05:33.559 TEST_HEADER include/spdk/opal_spec.h 00:05:33.559 TEST_HEADER include/spdk/pci_ids.h 00:05:33.559 TEST_HEADER include/spdk/pipe.h 
00:05:33.559 TEST_HEADER include/spdk/queue.h 00:05:33.559 TEST_HEADER include/spdk/reduce.h 00:05:33.559 TEST_HEADER include/spdk/rpc.h 00:05:33.559 TEST_HEADER include/spdk/scheduler.h 00:05:33.559 TEST_HEADER include/spdk/scsi.h 00:05:33.559 TEST_HEADER include/spdk/scsi_spec.h 00:05:33.559 TEST_HEADER include/spdk/sock.h 00:05:33.559 TEST_HEADER include/spdk/stdinc.h 00:05:33.559 TEST_HEADER include/spdk/string.h 00:05:33.559 TEST_HEADER include/spdk/thread.h 00:05:33.559 TEST_HEADER include/spdk/trace.h 00:05:33.559 TEST_HEADER include/spdk/trace_parser.h 00:05:33.559 TEST_HEADER include/spdk/tree.h 00:05:33.559 TEST_HEADER include/spdk/ublk.h 00:05:33.559 TEST_HEADER include/spdk/util.h 00:05:33.559 TEST_HEADER include/spdk/uuid.h 00:05:33.559 TEST_HEADER include/spdk/version.h 00:05:33.559 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:33.559 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:33.559 TEST_HEADER include/spdk/vhost.h 00:05:33.559 TEST_HEADER include/spdk/vmd.h 00:05:33.559 TEST_HEADER include/spdk/xor.h 00:05:33.559 TEST_HEADER include/spdk/zipf.h 00:05:33.559 CXX test/cpp_headers/accel.o 00:05:33.559 LINK nvmf_tgt 00:05:33.818 LINK iscsi_tgt 00:05:33.818 LINK zipf 00:05:33.818 LINK spdk_trace_record 00:05:33.818 LINK poller_perf 00:05:33.818 LINK bdev_svc 00:05:33.818 LINK spdk_tgt 00:05:33.818 CXX test/cpp_headers/accel_module.o 00:05:33.818 LINK spdk_trace 00:05:33.818 CXX test/cpp_headers/assert.o 00:05:34.077 CXX test/cpp_headers/barrier.o 00:05:34.077 CXX test/cpp_headers/base64.o 00:05:34.077 CC examples/ioat/perf/perf.o 00:05:34.077 CC examples/vmd/lsvmd/lsvmd.o 00:05:34.077 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:34.077 LINK test_dma 00:05:34.355 CC examples/vmd/led/led.o 00:05:34.355 CC examples/idxd/perf/perf.o 00:05:34.355 CC app/spdk_lspci/spdk_lspci.o 00:05:34.355 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:34.355 CC examples/thread/thread/thread_ex.o 00:05:34.355 LINK lsvmd 00:05:34.355 CXX test/cpp_headers/bdev.o 
00:05:34.355 LINK led 00:05:34.355 LINK spdk_lspci 00:05:34.355 LINK interrupt_tgt 00:05:34.355 LINK ioat_perf 00:05:34.614 CXX test/cpp_headers/bdev_module.o 00:05:34.614 LINK thread 00:05:34.614 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:34.614 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:34.614 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:34.614 LINK idxd_perf 00:05:34.614 CC test/app/histogram_perf/histogram_perf.o 00:05:34.614 CC app/spdk_nvme_perf/perf.o 00:05:34.614 CC examples/ioat/verify/verify.o 00:05:34.873 CXX test/cpp_headers/bdev_zone.o 00:05:34.873 CXX test/cpp_headers/bit_array.o 00:05:34.873 LINK nvme_fuzz 00:05:34.873 LINK histogram_perf 00:05:35.131 LINK verify 00:05:35.131 CC examples/sock/hello_world/hello_sock.o 00:05:35.131 CXX test/cpp_headers/bit_pool.o 00:05:35.131 CC test/env/mem_callbacks/mem_callbacks.o 00:05:35.131 CC test/env/vtophys/vtophys.o 00:05:35.131 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:35.131 CC test/env/memory/memory_ut.o 00:05:35.131 LINK vhost_fuzz 00:05:35.131 CXX test/cpp_headers/blob_bdev.o 00:05:35.390 CC app/spdk_nvme_identify/identify.o 00:05:35.390 LINK vtophys 00:05:35.390 LINK env_dpdk_post_init 00:05:35.390 LINK hello_sock 00:05:35.390 CXX test/cpp_headers/blobfs_bdev.o 00:05:35.390 CC app/spdk_nvme_discover/discovery_aer.o 00:05:35.649 CC test/env/pci/pci_ut.o 00:05:35.649 CC test/app/jsoncat/jsoncat.o 00:05:35.649 CXX test/cpp_headers/blobfs.o 00:05:35.649 LINK spdk_nvme_discover 00:05:35.649 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:35.649 LINK jsoncat 00:05:35.649 LINK mem_callbacks 00:05:35.907 LINK spdk_nvme_perf 00:05:35.907 CXX test/cpp_headers/blob.o 00:05:35.907 CXX test/cpp_headers/conf.o 00:05:35.907 CC test/app/stub/stub.o 00:05:36.167 LINK hello_fsdev 00:05:36.167 CC test/event/event_perf/event_perf.o 00:05:36.167 CC examples/accel/perf/accel_perf.o 00:05:36.167 LINK pci_ut 00:05:36.167 CXX test/cpp_headers/config.o 00:05:36.167 CXX test/cpp_headers/cpuset.o 
00:05:36.167 LINK stub 00:05:36.167 LINK event_perf 00:05:36.426 CC examples/blob/hello_world/hello_blob.o 00:05:36.426 CXX test/cpp_headers/crc16.o 00:05:36.426 LINK spdk_nvme_identify 00:05:36.426 CC examples/blob/cli/blobcli.o 00:05:36.685 CC test/event/reactor/reactor.o 00:05:36.685 CXX test/cpp_headers/crc32.o 00:05:36.685 LINK hello_blob 00:05:36.685 CC test/event/reactor_perf/reactor_perf.o 00:05:36.685 CC test/nvme/aer/aer.o 00:05:36.685 CC app/spdk_top/spdk_top.o 00:05:36.685 LINK memory_ut 00:05:36.685 LINK reactor 00:05:36.685 CXX test/cpp_headers/crc64.o 00:05:36.685 LINK reactor_perf 00:05:36.685 LINK accel_perf 00:05:36.945 LINK iscsi_fuzz 00:05:36.945 CC app/vhost/vhost.o 00:05:36.945 CXX test/cpp_headers/dif.o 00:05:36.945 LINK aer 00:05:36.945 CC test/rpc_client/rpc_client_test.o 00:05:36.945 LINK blobcli 00:05:36.945 CC test/event/app_repeat/app_repeat.o 00:05:37.204 CC app/spdk_dd/spdk_dd.o 00:05:37.204 LINK vhost 00:05:37.204 CXX test/cpp_headers/dma.o 00:05:37.204 CC app/fio/nvme/fio_plugin.o 00:05:37.204 LINK rpc_client_test 00:05:37.204 LINK app_repeat 00:05:37.204 CC test/nvme/reset/reset.o 00:05:37.462 CXX test/cpp_headers/endian.o 00:05:37.462 CC test/accel/dif/dif.o 00:05:37.462 CC examples/nvme/hello_world/hello_world.o 00:05:37.462 CC examples/nvme/reconnect/reconnect.o 00:05:37.462 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:37.462 LINK spdk_dd 00:05:37.462 CXX test/cpp_headers/env_dpdk.o 00:05:37.462 CC test/event/scheduler/scheduler.o 00:05:37.721 LINK reset 00:05:37.721 LINK hello_world 00:05:37.721 CXX test/cpp_headers/env.o 00:05:37.721 CXX test/cpp_headers/event.o 00:05:37.721 LINK spdk_top 00:05:37.980 LINK scheduler 00:05:37.980 CC test/nvme/sgl/sgl.o 00:05:37.980 LINK spdk_nvme 00:05:37.980 LINK reconnect 00:05:37.980 CXX test/cpp_headers/fd_group.o 00:05:37.980 CXX test/cpp_headers/fd.o 00:05:37.980 CC test/nvme/e2edp/nvme_dp.o 00:05:37.980 CC test/nvme/overhead/overhead.o 00:05:38.238 CC 
examples/nvme/arbitration/arbitration.o 00:05:38.238 CC app/fio/bdev/fio_plugin.o 00:05:38.238 CXX test/cpp_headers/file.o 00:05:38.238 LINK sgl 00:05:38.238 LINK nvme_manage 00:05:38.238 LINK dif 00:05:38.496 LINK nvme_dp 00:05:38.496 CC test/blobfs/mkfs/mkfs.o 00:05:38.496 CXX test/cpp_headers/fsdev.o 00:05:38.496 CC examples/bdev/hello_world/hello_bdev.o 00:05:38.496 CXX test/cpp_headers/fsdev_module.o 00:05:38.496 LINK overhead 00:05:38.496 CXX test/cpp_headers/ftl.o 00:05:38.754 LINK arbitration 00:05:38.754 LINK mkfs 00:05:38.754 CC examples/bdev/bdevperf/bdevperf.o 00:05:38.754 CC test/nvme/err_injection/err_injection.o 00:05:38.754 CXX test/cpp_headers/fuse_dispatcher.o 00:05:38.754 LINK hello_bdev 00:05:38.754 CC examples/nvme/hotplug/hotplug.o 00:05:38.754 LINK spdk_bdev 00:05:39.014 CXX test/cpp_headers/gpt_spec.o 00:05:39.014 CC test/bdev/bdevio/bdevio.o 00:05:39.014 LINK err_injection 00:05:39.014 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:39.014 CC test/lvol/esnap/esnap.o 00:05:39.014 CC examples/nvme/abort/abort.o 00:05:39.014 CC test/nvme/startup/startup.o 00:05:39.014 LINK hotplug 00:05:39.014 CXX test/cpp_headers/hexlify.o 00:05:39.014 CC test/nvme/reserve/reserve.o 00:05:39.014 LINK cmb_copy 00:05:39.273 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:39.273 CXX test/cpp_headers/histogram_data.o 00:05:39.273 LINK startup 00:05:39.273 CXX test/cpp_headers/idxd.o 00:05:39.273 LINK reserve 00:05:39.273 CC test/nvme/simple_copy/simple_copy.o 00:05:39.273 LINK pmr_persistence 00:05:39.273 CXX test/cpp_headers/idxd_spec.o 00:05:39.531 LINK bdevio 00:05:39.531 LINK abort 00:05:39.531 CC test/nvme/connect_stress/connect_stress.o 00:05:39.531 CC test/nvme/boot_partition/boot_partition.o 00:05:39.531 CXX test/cpp_headers/init.o 00:05:39.531 CC test/nvme/compliance/nvme_compliance.o 00:05:39.531 CXX test/cpp_headers/ioat.o 00:05:39.531 CC test/nvme/fused_ordering/fused_ordering.o 00:05:39.790 LINK simple_copy 00:05:39.790 CC 
test/nvme/doorbell_aers/doorbell_aers.o 00:05:39.790 LINK bdevperf 00:05:39.790 LINK connect_stress 00:05:39.790 LINK boot_partition 00:05:39.790 CXX test/cpp_headers/ioat_spec.o 00:05:39.790 CC test/nvme/fdp/fdp.o 00:05:39.790 LINK fused_ordering 00:05:39.790 CXX test/cpp_headers/iscsi_spec.o 00:05:40.048 LINK doorbell_aers 00:05:40.048 CC test/nvme/cuse/cuse.o 00:05:40.048 CXX test/cpp_headers/json.o 00:05:40.048 CXX test/cpp_headers/jsonrpc.o 00:05:40.048 LINK nvme_compliance 00:05:40.048 CXX test/cpp_headers/keyring.o 00:05:40.048 CXX test/cpp_headers/keyring_module.o 00:05:40.048 CXX test/cpp_headers/likely.o 00:05:40.048 CXX test/cpp_headers/log.o 00:05:40.048 CC examples/nvmf/nvmf/nvmf.o 00:05:40.308 CXX test/cpp_headers/lvol.o 00:05:40.308 CXX test/cpp_headers/md5.o 00:05:40.308 CXX test/cpp_headers/memory.o 00:05:40.308 CXX test/cpp_headers/mmio.o 00:05:40.308 CXX test/cpp_headers/nbd.o 00:05:40.308 LINK fdp 00:05:40.308 CXX test/cpp_headers/net.o 00:05:40.308 CXX test/cpp_headers/notify.o 00:05:40.308 CXX test/cpp_headers/nvme.o 00:05:40.309 CXX test/cpp_headers/nvme_intel.o 00:05:40.570 CXX test/cpp_headers/nvme_ocssd.o 00:05:40.570 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:40.570 CXX test/cpp_headers/nvme_spec.o 00:05:40.570 LINK nvmf 00:05:40.570 CXX test/cpp_headers/nvme_zns.o 00:05:40.570 CXX test/cpp_headers/nvmf_cmd.o 00:05:40.570 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:40.570 CXX test/cpp_headers/nvmf.o 00:05:40.570 CXX test/cpp_headers/nvmf_spec.o 00:05:40.570 CXX test/cpp_headers/nvmf_transport.o 00:05:40.570 CXX test/cpp_headers/opal.o 00:05:40.570 CXX test/cpp_headers/opal_spec.o 00:05:40.830 CXX test/cpp_headers/pci_ids.o 00:05:40.830 CXX test/cpp_headers/pipe.o 00:05:40.830 CXX test/cpp_headers/queue.o 00:05:40.830 CXX test/cpp_headers/reduce.o 00:05:40.830 CXX test/cpp_headers/rpc.o 00:05:40.830 CXX test/cpp_headers/scheduler.o 00:05:40.830 CXX test/cpp_headers/scsi.o 00:05:40.830 CXX test/cpp_headers/scsi_spec.o 00:05:40.830 CXX 
test/cpp_headers/sock.o 00:05:40.830 CXX test/cpp_headers/stdinc.o 00:05:40.830 CXX test/cpp_headers/string.o 00:05:41.089 CXX test/cpp_headers/thread.o 00:05:41.089 CXX test/cpp_headers/trace.o 00:05:41.089 CXX test/cpp_headers/trace_parser.o 00:05:41.089 CXX test/cpp_headers/tree.o 00:05:41.089 CXX test/cpp_headers/ublk.o 00:05:41.089 CXX test/cpp_headers/util.o 00:05:41.089 CXX test/cpp_headers/uuid.o 00:05:41.089 CXX test/cpp_headers/version.o 00:05:41.089 CXX test/cpp_headers/vfio_user_pci.o 00:05:41.089 CXX test/cpp_headers/vfio_user_spec.o 00:05:41.089 CXX test/cpp_headers/vhost.o 00:05:41.089 CXX test/cpp_headers/vmd.o 00:05:41.348 CXX test/cpp_headers/xor.o 00:05:41.348 CXX test/cpp_headers/zipf.o 00:05:41.606 LINK cuse 00:05:45.802 LINK esnap 00:05:46.371 00:05:46.371 real 1m44.097s 00:05:46.371 user 9m41.995s 00:05:46.371 sys 1m49.921s 00:05:46.371 14:31:45 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:05:46.371 14:31:45 make -- common/autotest_common.sh@10 -- $ set +x 00:05:46.371 ************************************ 00:05:46.371 END TEST make 00:05:46.371 ************************************ 00:05:46.371 14:31:45 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:46.371 14:31:45 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:46.372 14:31:45 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:46.372 14:31:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:46.372 14:31:45 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:46.372 14:31:45 -- pm/common@44 -- $ pid=5246 00:05:46.372 14:31:45 -- pm/common@50 -- $ kill -TERM 5246 00:05:46.372 14:31:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:46.372 14:31:45 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:46.372 14:31:45 -- pm/common@44 -- $ pid=5248 00:05:46.372 14:31:45 -- pm/common@50 -- $ kill -TERM 5248 00:05:46.372 
14:31:45 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:46.372 14:31:45 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:46.372 14:31:45 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:46.372 14:31:45 -- common/autotest_common.sh@1691 -- # lcov --version 00:05:46.372 14:31:45 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:46.636 14:31:45 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:46.636 14:31:45 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.636 14:31:45 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.636 14:31:45 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.636 14:31:45 -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.636 14:31:45 -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.636 14:31:45 -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.636 14:31:45 -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.636 14:31:45 -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.636 14:31:45 -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.636 14:31:45 -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.636 14:31:45 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.636 14:31:45 -- scripts/common.sh@344 -- # case "$op" in 00:05:46.636 14:31:45 -- scripts/common.sh@345 -- # : 1 00:05:46.636 14:31:45 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.636 14:31:45 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.636 14:31:45 -- scripts/common.sh@365 -- # decimal 1 00:05:46.636 14:31:45 -- scripts/common.sh@353 -- # local d=1 00:05:46.636 14:31:45 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.636 14:31:45 -- scripts/common.sh@355 -- # echo 1 00:05:46.636 14:31:45 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.636 14:31:45 -- scripts/common.sh@366 -- # decimal 2 00:05:46.636 14:31:45 -- scripts/common.sh@353 -- # local d=2 00:05:46.636 14:31:45 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.637 14:31:45 -- scripts/common.sh@355 -- # echo 2 00:05:46.637 14:31:45 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.637 14:31:45 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.637 14:31:45 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.637 14:31:45 -- scripts/common.sh@368 -- # return 0 00:05:46.637 14:31:45 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.637 14:31:45 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:46.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.637 --rc genhtml_branch_coverage=1 00:05:46.637 --rc genhtml_function_coverage=1 00:05:46.637 --rc genhtml_legend=1 00:05:46.637 --rc geninfo_all_blocks=1 00:05:46.637 --rc geninfo_unexecuted_blocks=1 00:05:46.637 00:05:46.637 ' 00:05:46.637 14:31:45 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:46.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.637 --rc genhtml_branch_coverage=1 00:05:46.637 --rc genhtml_function_coverage=1 00:05:46.637 --rc genhtml_legend=1 00:05:46.637 --rc geninfo_all_blocks=1 00:05:46.637 --rc geninfo_unexecuted_blocks=1 00:05:46.637 00:05:46.637 ' 00:05:46.637 14:31:45 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:46.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.637 --rc genhtml_branch_coverage=1 00:05:46.637 --rc 
genhtml_function_coverage=1 00:05:46.637 --rc genhtml_legend=1 00:05:46.637 --rc geninfo_all_blocks=1 00:05:46.637 --rc geninfo_unexecuted_blocks=1 00:05:46.637 00:05:46.637 ' 00:05:46.637 14:31:45 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:46.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.637 --rc genhtml_branch_coverage=1 00:05:46.637 --rc genhtml_function_coverage=1 00:05:46.637 --rc genhtml_legend=1 00:05:46.637 --rc geninfo_all_blocks=1 00:05:46.637 --rc geninfo_unexecuted_blocks=1 00:05:46.637 00:05:46.637 ' 00:05:46.637 14:31:45 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:46.637 14:31:45 -- nvmf/common.sh@7 -- # uname -s 00:05:46.637 14:31:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:46.637 14:31:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:46.637 14:31:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:46.637 14:31:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:46.637 14:31:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:46.637 14:31:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:46.637 14:31:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:46.637 14:31:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:46.637 14:31:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:46.637 14:31:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:46.637 14:31:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df2914e2-f71b-4480-87e8-79977859965f 00:05:46.637 14:31:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=df2914e2-f71b-4480-87e8-79977859965f 00:05:46.637 14:31:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:46.637 14:31:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:46.637 14:31:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:46.637 14:31:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
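The `cmp_versions` xtrace a little earlier in this log (checking `lcov 1.15 < 2` before enabling branch-coverage flags) amounts to a component-wise dotted-version comparison. A minimal standalone rendition of that logic, as a sketch (the real implementation lives in `scripts/common.sh`; this `lt` helper is a simplified approximation, not the SPDK function):

```shell
#!/usr/bin/env bash
# Sketch of the dotted-version "less than" check traced above.
# Returns 0 (true) when $1 < $2, 1 otherwise; missing components count as 0.
lt() {
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        # Compare one numeric component at a time, e.g. 1.15 vs 2 -> 1 vs 2
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal versions are not strictly less-than
}

lt 1.15 2 && echo "lcov 1.15 predates 2.x flags"
```

This is why the run above sets the pre-2.0 `--rc lcov_branch_coverage=1` style options: the installed `lcov --version` reports 1.15, which compares below 2.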
00:05:46.637 14:31:45 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:46.637 14:31:45 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:46.637 14:31:45 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.637 14:31:45 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.637 14:31:45 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.637 14:31:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.637 14:31:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.637 14:31:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.637 14:31:45 -- paths/export.sh@5 -- # export PATH 00:05:46.637 14:31:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.637 14:31:45 -- nvmf/common.sh@51 -- # : 0 00:05:46.637 14:31:45 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:46.637 14:31:45 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:46.637 14:31:45 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:05:46.637 14:31:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:46.637 14:31:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:46.637 14:31:45 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:46.637 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:46.637 14:31:45 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:46.637 14:31:45 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:46.637 14:31:45 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:46.637 14:31:45 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:46.637 14:31:45 -- spdk/autotest.sh@32 -- # uname -s 00:05:46.637 14:31:45 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:46.637 14:31:45 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:46.637 14:31:45 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:46.637 14:31:45 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:46.637 14:31:45 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:46.637 14:31:45 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:46.637 14:31:45 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:46.637 14:31:45 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:46.637 14:31:45 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:46.637 14:31:45 -- spdk/autotest.sh@48 -- # udevadm_pid=54376 00:05:46.637 14:31:45 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:46.637 14:31:45 -- pm/common@17 -- # local monitor 00:05:46.637 14:31:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:46.637 14:31:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:46.637 14:31:45 -- pm/common@25 -- # sleep 1 00:05:46.637 14:31:45 -- pm/common@21 -- # date +%s 00:05:46.637 14:31:45 -- 
pm/common@21 -- # date +%s 00:05:46.637 14:31:45 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730730705 00:05:46.637 14:31:45 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730730705 00:05:46.637 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730730705_collect-cpu-load.pm.log 00:05:46.637 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730730705_collect-vmstat.pm.log 00:05:47.575 14:31:46 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:47.575 14:31:46 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:47.575 14:31:46 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:47.575 14:31:46 -- common/autotest_common.sh@10 -- # set +x 00:05:47.575 14:31:46 -- spdk/autotest.sh@59 -- # create_test_list 00:05:47.575 14:31:46 -- common/autotest_common.sh@750 -- # xtrace_disable 00:05:47.575 14:31:46 -- common/autotest_common.sh@10 -- # set +x 00:05:47.833 14:31:46 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:47.833 14:31:46 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:47.833 14:31:46 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:47.833 14:31:46 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:47.833 14:31:46 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:47.833 14:31:46 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:47.833 14:31:46 -- common/autotest_common.sh@1455 -- # uname 00:05:47.833 14:31:46 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:47.833 14:31:46 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:47.833 14:31:46 -- common/autotest_common.sh@1475 -- 
# uname 00:05:47.833 14:31:46 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:47.833 14:31:46 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:47.833 14:31:46 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:47.833 lcov: LCOV version 1.15 00:05:47.833 14:31:46 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:05.917 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:05.917 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:24.040 14:32:20 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:24.040 14:32:20 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:24.040 14:32:20 -- common/autotest_common.sh@10 -- # set +x 00:06:24.040 14:32:20 -- spdk/autotest.sh@78 -- # rm -f 00:06:24.040 14:32:20 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:24.040 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:24.040 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:24.040 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:24.040 14:32:21 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:24.040 14:32:21 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:24.040 14:32:21 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:24.040 14:32:21 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:24.040 
14:32:21 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:24.040 14:32:21 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:24.040 14:32:21 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:24.040 14:32:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:24.040 14:32:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:24.040 14:32:21 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:24.040 14:32:21 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:06:24.040 14:32:21 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:06:24.040 14:32:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:24.040 14:32:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:24.040 14:32:21 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:24.040 14:32:21 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:06:24.040 14:32:21 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:06:24.040 14:32:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:24.040 14:32:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:24.040 14:32:21 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:24.040 14:32:21 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:06:24.040 14:32:21 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:06:24.040 14:32:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:24.040 14:32:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:24.040 14:32:21 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:24.040 14:32:21 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:24.040 14:32:21 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:24.040 14:32:21 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:06:24.040 14:32:21 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:24.040 14:32:21 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:24.040 No valid GPT data, bailing 00:06:24.040 14:32:21 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:24.040 14:32:21 -- scripts/common.sh@394 -- # pt= 00:06:24.040 14:32:21 -- scripts/common.sh@395 -- # return 1 00:06:24.040 14:32:21 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:24.040 1+0 records in 00:06:24.040 1+0 records out 00:06:24.040 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00466146 s, 225 MB/s 00:06:24.041 14:32:21 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:24.041 14:32:21 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:24.041 14:32:21 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:24.041 14:32:21 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:24.041 14:32:21 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:24.041 No valid GPT data, bailing 00:06:24.041 14:32:21 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:24.041 14:32:21 -- scripts/common.sh@394 -- # pt= 00:06:24.041 14:32:21 -- scripts/common.sh@395 -- # return 1 00:06:24.041 14:32:21 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:24.041 1+0 records in 00:06:24.041 1+0 records out 00:06:24.041 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.005246 s, 200 MB/s 00:06:24.041 14:32:21 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:24.041 14:32:21 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:24.041 14:32:21 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:06:24.041 14:32:21 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:06:24.041 14:32:21 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:06:24.041 No valid GPT data, bailing 00:06:24.041 14:32:21 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:24.041 14:32:21 -- scripts/common.sh@394 -- # pt= 00:06:24.041 14:32:21 -- scripts/common.sh@395 -- # return 1 00:06:24.041 14:32:21 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:24.041 1+0 records in 00:06:24.041 1+0 records out 00:06:24.041 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00457764 s, 229 MB/s 00:06:24.041 14:32:21 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:24.041 14:32:21 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:24.041 14:32:21 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:06:24.041 14:32:21 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:06:24.041 14:32:21 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:24.041 No valid GPT data, bailing 00:06:24.041 14:32:21 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:24.041 14:32:21 -- scripts/common.sh@394 -- # pt= 00:06:24.041 14:32:21 -- scripts/common.sh@395 -- # return 1 00:06:24.041 14:32:21 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:24.041 1+0 records in 00:06:24.041 1+0 records out 00:06:24.041 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00366088 s, 286 MB/s 00:06:24.041 14:32:21 -- spdk/autotest.sh@105 -- # sync 00:06:24.041 14:32:21 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:24.041 14:32:21 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:24.041 14:32:21 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:24.976 14:32:23 -- spdk/autotest.sh@111 -- # uname -s 00:06:24.976 14:32:23 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:24.976 14:32:23 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:24.976 14:32:23 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:06:25.235 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:25.235 Hugepages 00:06:25.235 node hugesize free / total 00:06:25.235 node0 1048576kB 0 / 0 00:06:25.495 node0 2048kB 0 / 0 00:06:25.495 00:06:25.495 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:25.495 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:25.495 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:25.495 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:25.495 14:32:24 -- spdk/autotest.sh@117 -- # uname -s 00:06:25.495 14:32:24 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:25.495 14:32:24 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:25.495 14:32:24 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:26.429 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:26.429 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:26.429 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:26.429 14:32:25 -- common/autotest_common.sh@1515 -- # sleep 1 00:06:27.364 14:32:26 -- common/autotest_common.sh@1516 -- # bdfs=() 00:06:27.364 14:32:26 -- common/autotest_common.sh@1516 -- # local bdfs 00:06:27.364 14:32:26 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:06:27.364 14:32:26 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:06:27.364 14:32:26 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:27.364 14:32:26 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:27.364 14:32:26 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:27.364 14:32:26 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:27.364 14:32:26 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:27.364 14:32:26 -- 
common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:06:27.364 14:32:26 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:27.364 14:32:26 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:27.931 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:27.931 Waiting for block devices as requested 00:06:27.931 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:27.931 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:27.931 14:32:27 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:27.931 14:32:27 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:27.931 14:32:27 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:27.931 14:32:27 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:06:27.931 14:32:27 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:27.931 14:32:27 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:27.931 14:32:27 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:28.190 14:32:27 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:06:28.190 14:32:27 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:06:28.190 14:32:27 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:06:28.190 14:32:27 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:06:28.190 14:32:27 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:28.190 14:32:27 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:28.190 14:32:27 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:28.190 14:32:27 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:28.190 14:32:27 -- common/autotest_common.sh@1532 -- 
# [[ 8 -ne 0 ]] 00:06:28.190 14:32:27 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:28.190 14:32:27 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:06:28.190 14:32:27 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:28.190 14:32:27 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:28.190 14:32:27 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:28.190 14:32:27 -- common/autotest_common.sh@1541 -- # continue 00:06:28.190 14:32:27 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:28.190 14:32:27 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:28.190 14:32:27 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:28.190 14:32:27 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:06:28.190 14:32:27 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:28.190 14:32:27 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:28.190 14:32:27 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:28.190 14:32:27 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:06:28.190 14:32:27 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:06:28.190 14:32:27 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:06:28.190 14:32:27 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:06:28.190 14:32:27 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:28.190 14:32:27 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:28.190 14:32:27 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:28.190 14:32:27 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:28.190 14:32:27 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:28.190 14:32:27 -- common/autotest_common.sh@1538 -- # grep unvmcap 
00:06:28.190 14:32:27 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:06:28.190 14:32:27 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:28.190 14:32:27 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:28.190 14:32:27 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:28.190 14:32:27 -- common/autotest_common.sh@1541 -- # continue 00:06:28.190 14:32:27 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:28.190 14:32:27 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:28.190 14:32:27 -- common/autotest_common.sh@10 -- # set +x 00:06:28.190 14:32:27 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:28.190 14:32:27 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:28.190 14:32:27 -- common/autotest_common.sh@10 -- # set +x 00:06:28.190 14:32:27 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:28.756 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:29.015 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:29.015 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:29.015 14:32:28 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:29.015 14:32:28 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:29.015 14:32:28 -- common/autotest_common.sh@10 -- # set +x 00:06:29.015 14:32:28 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:29.015 14:32:28 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:06:29.015 14:32:28 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:06:29.015 14:32:28 -- common/autotest_common.sh@1561 -- # bdfs=() 00:06:29.015 14:32:28 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:06:29.015 14:32:28 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:06:29.015 14:32:28 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:06:29.015 14:32:28 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:06:29.015 
14:32:28 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:29.015 14:32:28 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:29.015 14:32:28 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:29.015 14:32:28 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:29.015 14:32:28 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:29.015 14:32:28 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:06:29.015 14:32:28 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:29.015 14:32:28 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:29.015 14:32:28 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:29.015 14:32:28 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:29.015 14:32:28 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:29.016 14:32:28 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:29.016 14:32:28 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:29.016 14:32:28 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:29.016 14:32:28 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:29.016 14:32:28 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:06:29.016 14:32:28 -- common/autotest_common.sh@1570 -- # return 0 00:06:29.016 14:32:28 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:06:29.016 14:32:28 -- common/autotest_common.sh@1578 -- # return 0 00:06:29.016 14:32:28 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:29.016 14:32:28 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:29.016 14:32:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:29.016 14:32:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:29.016 14:32:28 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:29.016 14:32:28 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:06:29.016 14:32:28 -- common/autotest_common.sh@10 -- # set +x 00:06:29.016 14:32:28 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:29.016 14:32:28 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:29.016 14:32:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:29.016 14:32:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:29.016 14:32:28 -- common/autotest_common.sh@10 -- # set +x 00:06:29.279 ************************************ 00:06:29.279 START TEST env 00:06:29.279 ************************************ 00:06:29.279 14:32:28 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:29.279 * Looking for test storage... 00:06:29.279 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:29.279 14:32:28 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:29.279 14:32:28 env -- common/autotest_common.sh@1691 -- # lcov --version 00:06:29.279 14:32:28 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:29.279 14:32:28 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:29.279 14:32:28 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.279 14:32:28 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.279 14:32:28 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.279 14:32:28 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.279 14:32:28 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.279 14:32:28 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.279 14:32:28 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.279 14:32:28 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.279 14:32:28 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.279 14:32:28 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.279 14:32:28 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.279 14:32:28 env -- 
scripts/common.sh@344 -- # case "$op" in 00:06:29.279 14:32:28 env -- scripts/common.sh@345 -- # : 1 00:06:29.279 14:32:28 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.279 14:32:28 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:29.279 14:32:28 env -- scripts/common.sh@365 -- # decimal 1 00:06:29.279 14:32:28 env -- scripts/common.sh@353 -- # local d=1 00:06:29.279 14:32:28 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.279 14:32:28 env -- scripts/common.sh@355 -- # echo 1 00:06:29.279 14:32:28 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.279 14:32:28 env -- scripts/common.sh@366 -- # decimal 2 00:06:29.279 14:32:28 env -- scripts/common.sh@353 -- # local d=2 00:06:29.279 14:32:28 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.279 14:32:28 env -- scripts/common.sh@355 -- # echo 2 00:06:29.279 14:32:28 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.279 14:32:28 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.279 14:32:28 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.279 14:32:28 env -- scripts/common.sh@368 -- # return 0 00:06:29.279 14:32:28 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.279 14:32:28 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:29.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.279 --rc genhtml_branch_coverage=1 00:06:29.279 --rc genhtml_function_coverage=1 00:06:29.279 --rc genhtml_legend=1 00:06:29.279 --rc geninfo_all_blocks=1 00:06:29.279 --rc geninfo_unexecuted_blocks=1 00:06:29.279 00:06:29.279 ' 00:06:29.279 14:32:28 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:29.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.279 --rc genhtml_branch_coverage=1 00:06:29.279 --rc genhtml_function_coverage=1 00:06:29.279 --rc genhtml_legend=1 00:06:29.279 --rc 
geninfo_all_blocks=1 00:06:29.279 --rc geninfo_unexecuted_blocks=1 00:06:29.279 00:06:29.279 ' 00:06:29.279 14:32:28 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:29.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.279 --rc genhtml_branch_coverage=1 00:06:29.279 --rc genhtml_function_coverage=1 00:06:29.279 --rc genhtml_legend=1 00:06:29.279 --rc geninfo_all_blocks=1 00:06:29.279 --rc geninfo_unexecuted_blocks=1 00:06:29.279 00:06:29.279 ' 00:06:29.279 14:32:28 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:29.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.279 --rc genhtml_branch_coverage=1 00:06:29.279 --rc genhtml_function_coverage=1 00:06:29.279 --rc genhtml_legend=1 00:06:29.279 --rc geninfo_all_blocks=1 00:06:29.279 --rc geninfo_unexecuted_blocks=1 00:06:29.279 00:06:29.279 ' 00:06:29.279 14:32:28 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:29.279 14:32:28 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:29.279 14:32:28 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:29.279 14:32:28 env -- common/autotest_common.sh@10 -- # set +x 00:06:29.279 ************************************ 00:06:29.279 START TEST env_memory 00:06:29.279 ************************************ 00:06:29.279 14:32:28 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:29.279 00:06:29.279 00:06:29.279 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.279 http://cunit.sourceforge.net/ 00:06:29.279 00:06:29.279 00:06:29.279 Suite: memory 00:06:29.551 Test: alloc and free memory map ...[2024-11-04 14:32:28.419580] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:29.551 passed 00:06:29.551 Test: mem map translation ...[2024-11-04 14:32:28.481174] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:29.551 [2024-11-04 14:32:28.481500] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:29.551 [2024-11-04 14:32:28.481755] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:29.551 [2024-11-04 14:32:28.482051] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:29.551 passed 00:06:29.551 Test: mem map registration ...[2024-11-04 14:32:28.581151] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:29.551 [2024-11-04 14:32:28.581258] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:29.551 passed 00:06:29.810 Test: mem map adjacent registrations ...passed 00:06:29.810 00:06:29.810 Run Summary: Type Total Ran Passed Failed Inactive 00:06:29.810 suites 1 1 n/a 0 0 00:06:29.810 tests 4 4 4 0 0 00:06:29.810 asserts 152 152 152 0 n/a 00:06:29.810 00:06:29.810 Elapsed time = 0.339 seconds 00:06:29.810 ************************************ 00:06:29.811 END TEST env_memory 00:06:29.811 ************************************ 00:06:29.811 00:06:29.811 real 0m0.383s 00:06:29.811 user 0m0.347s 00:06:29.811 sys 0m0.026s 00:06:29.811 14:32:28 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:29.811 14:32:28 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:29.811 14:32:28 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:29.811 
14:32:28 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:29.811 14:32:28 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:29.811 14:32:28 env -- common/autotest_common.sh@10 -- # set +x 00:06:29.811 ************************************ 00:06:29.811 START TEST env_vtophys 00:06:29.811 ************************************ 00:06:29.811 14:32:28 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:29.811 EAL: lib.eal log level changed from notice to debug 00:06:29.811 EAL: Detected lcore 0 as core 0 on socket 0 00:06:29.811 EAL: Detected lcore 1 as core 0 on socket 0 00:06:29.811 EAL: Detected lcore 2 as core 0 on socket 0 00:06:29.811 EAL: Detected lcore 3 as core 0 on socket 0 00:06:29.811 EAL: Detected lcore 4 as core 0 on socket 0 00:06:29.811 EAL: Detected lcore 5 as core 0 on socket 0 00:06:29.811 EAL: Detected lcore 6 as core 0 on socket 0 00:06:29.811 EAL: Detected lcore 7 as core 0 on socket 0 00:06:29.811 EAL: Detected lcore 8 as core 0 on socket 0 00:06:29.811 EAL: Detected lcore 9 as core 0 on socket 0 00:06:29.811 EAL: Maximum logical cores by configuration: 128 00:06:29.811 EAL: Detected CPU lcores: 10 00:06:29.811 EAL: Detected NUMA nodes: 1 00:06:29.811 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:29.811 EAL: Detected shared linkage of DPDK 00:06:29.811 EAL: No shared files mode enabled, IPC will be disabled 00:06:29.811 EAL: Selected IOVA mode 'PA' 00:06:29.811 EAL: Probing VFIO support... 00:06:29.811 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:29.811 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:29.811 EAL: Ask a virtual area of 0x2e000 bytes 00:06:29.811 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:29.811 EAL: Setting up physically contiguous memory... 
00:06:29.811 EAL: Setting maximum number of open files to 524288 00:06:29.811 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:29.811 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:29.811 EAL: Ask a virtual area of 0x61000 bytes 00:06:29.811 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:29.811 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:29.811 EAL: Ask a virtual area of 0x400000000 bytes 00:06:29.811 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:29.811 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:29.811 EAL: Ask a virtual area of 0x61000 bytes 00:06:29.811 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:29.811 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:29.811 EAL: Ask a virtual area of 0x400000000 bytes 00:06:29.811 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:29.811 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:29.811 EAL: Ask a virtual area of 0x61000 bytes 00:06:29.811 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:29.811 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:29.811 EAL: Ask a virtual area of 0x400000000 bytes 00:06:29.811 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:29.811 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:29.811 EAL: Ask a virtual area of 0x61000 bytes 00:06:29.811 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:29.811 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:29.811 EAL: Ask a virtual area of 0x400000000 bytes 00:06:29.811 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:29.811 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:29.811 EAL: Hugepages will be freed exactly as allocated. 
00:06:29.811 EAL: No shared files mode enabled, IPC is disabled 00:06:29.811 EAL: No shared files mode enabled, IPC is disabled 00:06:30.070 EAL: TSC frequency is ~2200000 KHz 00:06:30.070 EAL: Main lcore 0 is ready (tid=7f4c88329a40;cpuset=[0]) 00:06:30.070 EAL: Trying to obtain current memory policy. 00:06:30.070 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:30.070 EAL: Restoring previous memory policy: 0 00:06:30.070 EAL: request: mp_malloc_sync 00:06:30.070 EAL: No shared files mode enabled, IPC is disabled 00:06:30.070 EAL: Heap on socket 0 was expanded by 2MB 00:06:30.070 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:30.070 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:30.070 EAL: Mem event callback 'spdk:(nil)' registered 00:06:30.070 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:30.070 00:06:30.070 00:06:30.070 CUnit - A unit testing framework for C - Version 2.1-3 00:06:30.070 http://cunit.sourceforge.net/ 00:06:30.070 00:06:30.070 00:06:30.070 Suite: components_suite 00:06:30.636 Test: vtophys_malloc_test ...passed 00:06:30.636 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:30.636 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:30.636 EAL: Restoring previous memory policy: 4 00:06:30.636 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.636 EAL: request: mp_malloc_sync 00:06:30.636 EAL: No shared files mode enabled, IPC is disabled 00:06:30.636 EAL: Heap on socket 0 was expanded by 4MB 00:06:30.636 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.636 EAL: request: mp_malloc_sync 00:06:30.636 EAL: No shared files mode enabled, IPC is disabled 00:06:30.636 EAL: Heap on socket 0 was shrunk by 4MB 00:06:30.636 EAL: Trying to obtain current memory policy. 
00:06:30.636 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:30.636 EAL: Restoring previous memory policy: 4 00:06:30.636 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.636 EAL: request: mp_malloc_sync 00:06:30.636 EAL: No shared files mode enabled, IPC is disabled 00:06:30.636 EAL: Heap on socket 0 was expanded by 6MB 00:06:30.636 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.636 EAL: request: mp_malloc_sync 00:06:30.636 EAL: No shared files mode enabled, IPC is disabled 00:06:30.636 EAL: Heap on socket 0 was shrunk by 6MB 00:06:30.636 EAL: Trying to obtain current memory policy. 00:06:30.636 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:30.636 EAL: Restoring previous memory policy: 4 00:06:30.636 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.636 EAL: request: mp_malloc_sync 00:06:30.636 EAL: No shared files mode enabled, IPC is disabled 00:06:30.636 EAL: Heap on socket 0 was expanded by 10MB 00:06:30.636 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.636 EAL: request: mp_malloc_sync 00:06:30.636 EAL: No shared files mode enabled, IPC is disabled 00:06:30.636 EAL: Heap on socket 0 was shrunk by 10MB 00:06:30.636 EAL: Trying to obtain current memory policy. 00:06:30.636 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:30.636 EAL: Restoring previous memory policy: 4 00:06:30.636 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.636 EAL: request: mp_malloc_sync 00:06:30.636 EAL: No shared files mode enabled, IPC is disabled 00:06:30.636 EAL: Heap on socket 0 was expanded by 18MB 00:06:30.636 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.636 EAL: request: mp_malloc_sync 00:06:30.636 EAL: No shared files mode enabled, IPC is disabled 00:06:30.636 EAL: Heap on socket 0 was shrunk by 18MB 00:06:30.636 EAL: Trying to obtain current memory policy. 
00:06:30.636 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:30.636 EAL: Restoring previous memory policy: 4 00:06:30.636 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.636 EAL: request: mp_malloc_sync 00:06:30.636 EAL: No shared files mode enabled, IPC is disabled 00:06:30.636 EAL: Heap on socket 0 was expanded by 34MB 00:06:30.636 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.636 EAL: request: mp_malloc_sync 00:06:30.636 EAL: No shared files mode enabled, IPC is disabled 00:06:30.636 EAL: Heap on socket 0 was shrunk by 34MB 00:06:30.636 EAL: Trying to obtain current memory policy. 00:06:30.636 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:30.636 EAL: Restoring previous memory policy: 4 00:06:30.636 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.636 EAL: request: mp_malloc_sync 00:06:30.636 EAL: No shared files mode enabled, IPC is disabled 00:06:30.636 EAL: Heap on socket 0 was expanded by 66MB 00:06:30.894 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.894 EAL: request: mp_malloc_sync 00:06:30.894 EAL: No shared files mode enabled, IPC is disabled 00:06:30.894 EAL: Heap on socket 0 was shrunk by 66MB 00:06:30.894 EAL: Trying to obtain current memory policy. 00:06:30.894 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:30.894 EAL: Restoring previous memory policy: 4 00:06:30.894 EAL: Calling mem event callback 'spdk:(nil)' 00:06:30.894 EAL: request: mp_malloc_sync 00:06:30.894 EAL: No shared files mode enabled, IPC is disabled 00:06:30.894 EAL: Heap on socket 0 was expanded by 130MB 00:06:31.150 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.150 EAL: request: mp_malloc_sync 00:06:31.150 EAL: No shared files mode enabled, IPC is disabled 00:06:31.150 EAL: Heap on socket 0 was shrunk by 130MB 00:06:31.409 EAL: Trying to obtain current memory policy. 
00:06:31.409 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:31.409 EAL: Restoring previous memory policy: 4 00:06:31.409 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.409 EAL: request: mp_malloc_sync 00:06:31.409 EAL: No shared files mode enabled, IPC is disabled 00:06:31.409 EAL: Heap on socket 0 was expanded by 258MB 00:06:31.977 EAL: Calling mem event callback 'spdk:(nil)' 00:06:31.977 EAL: request: mp_malloc_sync 00:06:31.977 EAL: No shared files mode enabled, IPC is disabled 00:06:31.977 EAL: Heap on socket 0 was shrunk by 258MB 00:06:32.236 EAL: Trying to obtain current memory policy. 00:06:32.236 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:32.495 EAL: Restoring previous memory policy: 4 00:06:32.495 EAL: Calling mem event callback 'spdk:(nil)' 00:06:32.495 EAL: request: mp_malloc_sync 00:06:32.495 EAL: No shared files mode enabled, IPC is disabled 00:06:32.495 EAL: Heap on socket 0 was expanded by 514MB 00:06:33.430 EAL: Calling mem event callback 'spdk:(nil)' 00:06:33.430 EAL: request: mp_malloc_sync 00:06:33.430 EAL: No shared files mode enabled, IPC is disabled 00:06:33.430 EAL: Heap on socket 0 was shrunk by 514MB 00:06:34.365 EAL: Trying to obtain current memory policy. 
00:06:34.365 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:34.365 EAL: Restoring previous memory policy: 4 00:06:34.365 EAL: Calling mem event callback 'spdk:(nil)' 00:06:34.365 EAL: request: mp_malloc_sync 00:06:34.365 EAL: No shared files mode enabled, IPC is disabled 00:06:34.365 EAL: Heap on socket 0 was expanded by 1026MB 00:06:36.267 EAL: Calling mem event callback 'spdk:(nil)' 00:06:36.267 EAL: request: mp_malloc_sync 00:06:36.267 EAL: No shared files mode enabled, IPC is disabled 00:06:36.267 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:37.683 passed 00:06:37.683 00:06:37.683 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.683 suites 1 1 n/a 0 0 00:06:37.683 tests 2 2 2 0 0 00:06:37.683 asserts 5621 5621 5621 0 n/a 00:06:37.683 00:06:37.683 Elapsed time = 7.711 seconds 00:06:37.683 EAL: Calling mem event callback 'spdk:(nil)' 00:06:37.683 EAL: request: mp_malloc_sync 00:06:37.683 EAL: No shared files mode enabled, IPC is disabled 00:06:37.683 EAL: Heap on socket 0 was shrunk by 2MB 00:06:37.683 EAL: No shared files mode enabled, IPC is disabled 00:06:37.683 EAL: No shared files mode enabled, IPC is disabled 00:06:37.683 EAL: No shared files mode enabled, IPC is disabled 00:06:37.942 00:06:37.942 real 0m8.061s 00:06:37.942 user 0m6.828s 00:06:37.942 sys 0m1.052s 00:06:37.942 ************************************ 00:06:37.942 END TEST env_vtophys 00:06:37.942 ************************************ 00:06:37.942 14:32:36 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:37.942 14:32:36 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:37.942 14:32:36 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:37.942 14:32:36 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:37.942 14:32:36 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:37.942 14:32:36 env -- common/autotest_common.sh@10 -- # set +x 00:06:37.942 
************************************ 00:06:37.942 START TEST env_pci 00:06:37.942 ************************************ 00:06:37.942 14:32:36 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:37.942 00:06:37.942 00:06:37.942 CUnit - A unit testing framework for C - Version 2.1-3 00:06:37.942 http://cunit.sourceforge.net/ 00:06:37.942 00:06:37.942 00:06:37.942 Suite: pci 00:06:37.942 Test: pci_hook ...[2024-11-04 14:32:36.934193] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56704 has claimed it 00:06:37.942 passed 00:06:37.942 00:06:37.942 Run Summary: Type Total Ran Passed Failed Inactive 00:06:37.942 suites 1 1 n/a 0 0 00:06:37.942 tests 1 1 1 0 0 00:06:37.942 asserts 25 25 25 0 n/a 00:06:37.942 00:06:37.942 Elapsed time = 0.008 seconds 00:06:37.942 EAL: Cannot find device (10000:00:01.0) 00:06:37.942 EAL: Failed to attach device on primary process 00:06:37.942 00:06:37.942 real 0m0.089s 00:06:37.942 user 0m0.041s 00:06:37.942 sys 0m0.046s 00:06:37.942 14:32:36 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:37.942 14:32:36 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:37.942 ************************************ 00:06:37.942 END TEST env_pci 00:06:37.942 ************************************ 00:06:37.942 14:32:37 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:37.942 14:32:37 env -- env/env.sh@15 -- # uname 00:06:37.942 14:32:37 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:37.942 14:32:37 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:37.942 14:32:37 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:37.942 14:32:37 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:06:37.942 14:32:37 env 
-- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:37.942 14:32:37 env -- common/autotest_common.sh@10 -- # set +x 00:06:37.942 ************************************ 00:06:37.942 START TEST env_dpdk_post_init 00:06:37.942 ************************************ 00:06:37.942 14:32:37 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:38.201 EAL: Detected CPU lcores: 10 00:06:38.201 EAL: Detected NUMA nodes: 1 00:06:38.201 EAL: Detected shared linkage of DPDK 00:06:38.201 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:38.201 EAL: Selected IOVA mode 'PA' 00:06:38.201 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:38.201 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:38.201 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:38.460 Starting DPDK initialization... 00:06:38.460 Starting SPDK post initialization... 00:06:38.460 SPDK NVMe probe 00:06:38.460 Attaching to 0000:00:10.0 00:06:38.460 Attaching to 0000:00:11.0 00:06:38.460 Attached to 0000:00:10.0 00:06:38.460 Attached to 0000:00:11.0 00:06:38.460 Cleaning up... 
00:06:38.460 ************************************ 00:06:38.460 END TEST env_dpdk_post_init 00:06:38.460 ************************************ 00:06:38.460 00:06:38.460 real 0m0.302s 00:06:38.460 user 0m0.097s 00:06:38.460 sys 0m0.102s 00:06:38.460 14:32:37 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:38.460 14:32:37 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:38.460 14:32:37 env -- env/env.sh@26 -- # uname 00:06:38.460 14:32:37 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:38.460 14:32:37 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:38.460 14:32:37 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:38.460 14:32:37 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:38.460 14:32:37 env -- common/autotest_common.sh@10 -- # set +x 00:06:38.460 ************************************ 00:06:38.460 START TEST env_mem_callbacks 00:06:38.460 ************************************ 00:06:38.460 14:32:37 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:38.460 EAL: Detected CPU lcores: 10 00:06:38.460 EAL: Detected NUMA nodes: 1 00:06:38.460 EAL: Detected shared linkage of DPDK 00:06:38.460 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:38.460 EAL: Selected IOVA mode 'PA' 00:06:38.719 00:06:38.719 00:06:38.719 CUnit - A unit testing framework for C - Version 2.1-3 00:06:38.719 http://cunit.sourceforge.net/ 00:06:38.719 00:06:38.719 00:06:38.719 Suite: memory 00:06:38.719 Test: test ... 
00:06:38.719 register 0x200000200000 2097152 00:06:38.719 malloc 3145728 00:06:38.719 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:38.719 register 0x200000400000 4194304 00:06:38.719 buf 0x2000004fffc0 len 3145728 PASSED 00:06:38.719 malloc 64 00:06:38.719 buf 0x2000004ffec0 len 64 PASSED 00:06:38.719 malloc 4194304 00:06:38.719 register 0x200000800000 6291456 00:06:38.719 buf 0x2000009fffc0 len 4194304 PASSED 00:06:38.719 free 0x2000004fffc0 3145728 00:06:38.719 free 0x2000004ffec0 64 00:06:38.719 unregister 0x200000400000 4194304 PASSED 00:06:38.719 free 0x2000009fffc0 4194304 00:06:38.719 unregister 0x200000800000 6291456 PASSED 00:06:38.719 malloc 8388608 00:06:38.719 register 0x200000400000 10485760 00:06:38.719 buf 0x2000005fffc0 len 8388608 PASSED 00:06:38.719 free 0x2000005fffc0 8388608 00:06:38.719 unregister 0x200000400000 10485760 PASSED 00:06:38.719 passed 00:06:38.719 00:06:38.719 Run Summary: Type Total Ran Passed Failed Inactive 00:06:38.719 suites 1 1 n/a 0 0 00:06:38.719 tests 1 1 1 0 0 00:06:38.719 asserts 15 15 15 0 n/a 00:06:38.719 00:06:38.719 Elapsed time = 0.062 seconds 00:06:38.719 00:06:38.719 real 0m0.283s 00:06:38.719 user 0m0.106s 00:06:38.719 sys 0m0.073s 00:06:38.719 14:32:37 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:38.719 14:32:37 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:38.719 ************************************ 00:06:38.719 END TEST env_mem_callbacks 00:06:38.719 ************************************ 00:06:38.719 ************************************ 00:06:38.719 END TEST env 00:06:38.719 ************************************ 00:06:38.719 00:06:38.719 real 0m9.588s 00:06:38.719 user 0m7.607s 00:06:38.719 sys 0m1.566s 00:06:38.720 14:32:37 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:38.720 14:32:37 env -- common/autotest_common.sh@10 -- # set +x 00:06:38.720 14:32:37 -- spdk/autotest.sh@156 -- # run_test rpc 
/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:38.720 14:32:37 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:38.720 14:32:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:38.720 14:32:37 -- common/autotest_common.sh@10 -- # set +x 00:06:38.720 ************************************ 00:06:38.720 START TEST rpc 00:06:38.720 ************************************ 00:06:38.720 14:32:37 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:38.980 * Looking for test storage... 00:06:38.980 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:38.980 14:32:37 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:38.980 14:32:37 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:38.980 14:32:37 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:38.980 14:32:37 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:38.980 14:32:37 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.980 14:32:37 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.980 14:32:37 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.980 14:32:37 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.980 14:32:37 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.980 14:32:37 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.980 14:32:37 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.980 14:32:37 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.980 14:32:37 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.980 14:32:37 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.980 14:32:37 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.980 14:32:37 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:38.980 14:32:37 rpc -- scripts/common.sh@345 -- # : 1 00:06:38.980 14:32:37 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.980 14:32:37 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:38.980 14:32:37 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:38.980 14:32:37 rpc -- scripts/common.sh@353 -- # local d=1 00:06:38.980 14:32:37 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.980 14:32:37 rpc -- scripts/common.sh@355 -- # echo 1 00:06:38.980 14:32:37 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.980 14:32:37 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:38.980 14:32:37 rpc -- scripts/common.sh@353 -- # local d=2 00:06:38.980 14:32:37 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.980 14:32:37 rpc -- scripts/common.sh@355 -- # echo 2 00:06:38.980 14:32:37 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.980 14:32:37 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.980 14:32:37 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.980 14:32:37 rpc -- scripts/common.sh@368 -- # return 0 00:06:38.980 14:32:37 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.980 14:32:37 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:38.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.980 --rc genhtml_branch_coverage=1 00:06:38.980 --rc genhtml_function_coverage=1 00:06:38.980 --rc genhtml_legend=1 00:06:38.980 --rc geninfo_all_blocks=1 00:06:38.980 --rc geninfo_unexecuted_blocks=1 00:06:38.980 00:06:38.980 ' 00:06:38.980 14:32:37 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:38.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.980 --rc genhtml_branch_coverage=1 00:06:38.980 --rc genhtml_function_coverage=1 00:06:38.980 --rc genhtml_legend=1 00:06:38.980 --rc geninfo_all_blocks=1 00:06:38.980 --rc geninfo_unexecuted_blocks=1 00:06:38.980 00:06:38.980 ' 00:06:38.980 14:32:37 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:38.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:38.980 --rc genhtml_branch_coverage=1 00:06:38.980 --rc genhtml_function_coverage=1 00:06:38.980 --rc genhtml_legend=1 00:06:38.980 --rc geninfo_all_blocks=1 00:06:38.980 --rc geninfo_unexecuted_blocks=1 00:06:38.980 00:06:38.980 ' 00:06:38.980 14:32:37 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:38.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.980 --rc genhtml_branch_coverage=1 00:06:38.980 --rc genhtml_function_coverage=1 00:06:38.980 --rc genhtml_legend=1 00:06:38.980 --rc geninfo_all_blocks=1 00:06:38.980 --rc geninfo_unexecuted_blocks=1 00:06:38.980 00:06:38.980 ' 00:06:38.980 14:32:37 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56831 00:06:38.980 14:32:37 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:38.980 14:32:37 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:38.980 14:32:37 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56831 00:06:38.980 14:32:37 rpc -- common/autotest_common.sh@833 -- # '[' -z 56831 ']' 00:06:38.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.980 14:32:37 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.980 14:32:37 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:38.980 14:32:37 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.980 14:32:37 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:38.980 14:32:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.240 [2024-11-04 14:32:38.156777] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:06:39.240 [2024-11-04 14:32:38.156979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56831 ] 00:06:39.240 [2024-11-04 14:32:38.348637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.499 [2024-11-04 14:32:38.513622] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:39.499 [2024-11-04 14:32:38.513723] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56831' to capture a snapshot of events at runtime. 00:06:39.499 [2024-11-04 14:32:38.513745] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:39.499 [2024-11-04 14:32:38.513777] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:39.499 [2024-11-04 14:32:38.513792] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56831 for offline analysis/debug. 
00:06:39.499 [2024-11-04 14:32:38.515509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.436 14:32:39 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:40.436 14:32:39 rpc -- common/autotest_common.sh@866 -- # return 0 00:06:40.436 14:32:39 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:40.436 14:32:39 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:40.436 14:32:39 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:40.436 14:32:39 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:40.436 14:32:39 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:40.436 14:32:39 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:40.436 14:32:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.436 ************************************ 00:06:40.436 START TEST rpc_integrity 00:06:40.436 ************************************ 00:06:40.436 14:32:39 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:06:40.436 14:32:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:40.436 14:32:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.436 14:32:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.436 14:32:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.436 14:32:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:40.436 14:32:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:40.436 14:32:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:40.436 14:32:39 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:40.436 14:32:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.436 14:32:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.694 14:32:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.694 14:32:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:40.694 14:32:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:40.694 14:32:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.694 14:32:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.694 14:32:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.694 14:32:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:40.694 { 00:06:40.694 "name": "Malloc0", 00:06:40.694 "aliases": [ 00:06:40.694 "03c45849-f5fd-4f06-af55-34b8e5a60cf7" 00:06:40.694 ], 00:06:40.694 "product_name": "Malloc disk", 00:06:40.694 "block_size": 512, 00:06:40.694 "num_blocks": 16384, 00:06:40.694 "uuid": "03c45849-f5fd-4f06-af55-34b8e5a60cf7", 00:06:40.694 "assigned_rate_limits": { 00:06:40.694 "rw_ios_per_sec": 0, 00:06:40.694 "rw_mbytes_per_sec": 0, 00:06:40.694 "r_mbytes_per_sec": 0, 00:06:40.694 "w_mbytes_per_sec": 0 00:06:40.694 }, 00:06:40.694 "claimed": false, 00:06:40.694 "zoned": false, 00:06:40.694 "supported_io_types": { 00:06:40.694 "read": true, 00:06:40.694 "write": true, 00:06:40.694 "unmap": true, 00:06:40.694 "flush": true, 00:06:40.694 "reset": true, 00:06:40.694 "nvme_admin": false, 00:06:40.694 "nvme_io": false, 00:06:40.694 "nvme_io_md": false, 00:06:40.694 "write_zeroes": true, 00:06:40.694 "zcopy": true, 00:06:40.694 "get_zone_info": false, 00:06:40.694 "zone_management": false, 00:06:40.694 "zone_append": false, 00:06:40.694 "compare": false, 00:06:40.694 "compare_and_write": false, 00:06:40.694 "abort": true, 00:06:40.694 "seek_hole": false, 
00:06:40.694 "seek_data": false, 00:06:40.694 "copy": true, 00:06:40.694 "nvme_iov_md": false 00:06:40.694 }, 00:06:40.694 "memory_domains": [ 00:06:40.694 { 00:06:40.694 "dma_device_id": "system", 00:06:40.694 "dma_device_type": 1 00:06:40.694 }, 00:06:40.694 { 00:06:40.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:40.694 "dma_device_type": 2 00:06:40.694 } 00:06:40.694 ], 00:06:40.694 "driver_specific": {} 00:06:40.694 } 00:06:40.694 ]' 00:06:40.694 14:32:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:40.694 14:32:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:40.694 14:32:39 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:40.694 14:32:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.694 14:32:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.694 [2024-11-04 14:32:39.636827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:40.694 [2024-11-04 14:32:39.637071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:40.694 [2024-11-04 14:32:39.637128] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:06:40.694 [2024-11-04 14:32:39.637153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:40.694 [2024-11-04 14:32:39.640495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:40.694 [2024-11-04 14:32:39.640748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:40.694 Passthru0 00:06:40.694 14:32:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.694 14:32:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:40.694 14:32:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.694 14:32:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:06:40.694 14:32:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.694 14:32:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:40.694 { 00:06:40.694 "name": "Malloc0", 00:06:40.694 "aliases": [ 00:06:40.694 "03c45849-f5fd-4f06-af55-34b8e5a60cf7" 00:06:40.694 ], 00:06:40.694 "product_name": "Malloc disk", 00:06:40.694 "block_size": 512, 00:06:40.694 "num_blocks": 16384, 00:06:40.694 "uuid": "03c45849-f5fd-4f06-af55-34b8e5a60cf7", 00:06:40.694 "assigned_rate_limits": { 00:06:40.694 "rw_ios_per_sec": 0, 00:06:40.694 "rw_mbytes_per_sec": 0, 00:06:40.694 "r_mbytes_per_sec": 0, 00:06:40.694 "w_mbytes_per_sec": 0 00:06:40.694 }, 00:06:40.694 "claimed": true, 00:06:40.694 "claim_type": "exclusive_write", 00:06:40.694 "zoned": false, 00:06:40.694 "supported_io_types": { 00:06:40.694 "read": true, 00:06:40.694 "write": true, 00:06:40.694 "unmap": true, 00:06:40.694 "flush": true, 00:06:40.694 "reset": true, 00:06:40.694 "nvme_admin": false, 00:06:40.694 "nvme_io": false, 00:06:40.694 "nvme_io_md": false, 00:06:40.694 "write_zeroes": true, 00:06:40.694 "zcopy": true, 00:06:40.694 "get_zone_info": false, 00:06:40.694 "zone_management": false, 00:06:40.694 "zone_append": false, 00:06:40.694 "compare": false, 00:06:40.694 "compare_and_write": false, 00:06:40.694 "abort": true, 00:06:40.694 "seek_hole": false, 00:06:40.694 "seek_data": false, 00:06:40.694 "copy": true, 00:06:40.694 "nvme_iov_md": false 00:06:40.694 }, 00:06:40.694 "memory_domains": [ 00:06:40.694 { 00:06:40.694 "dma_device_id": "system", 00:06:40.694 "dma_device_type": 1 00:06:40.694 }, 00:06:40.694 { 00:06:40.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:40.694 "dma_device_type": 2 00:06:40.694 } 00:06:40.694 ], 00:06:40.694 "driver_specific": {} 00:06:40.694 }, 00:06:40.694 { 00:06:40.694 "name": "Passthru0", 00:06:40.694 "aliases": [ 00:06:40.694 "a67dfa42-a2e1-5034-97e1-a17a37991996" 00:06:40.694 ], 00:06:40.694 "product_name": "passthru", 00:06:40.694 
"block_size": 512, 00:06:40.694 "num_blocks": 16384, 00:06:40.694 "uuid": "a67dfa42-a2e1-5034-97e1-a17a37991996", 00:06:40.694 "assigned_rate_limits": { 00:06:40.694 "rw_ios_per_sec": 0, 00:06:40.694 "rw_mbytes_per_sec": 0, 00:06:40.694 "r_mbytes_per_sec": 0, 00:06:40.694 "w_mbytes_per_sec": 0 00:06:40.694 }, 00:06:40.694 "claimed": false, 00:06:40.694 "zoned": false, 00:06:40.694 "supported_io_types": { 00:06:40.694 "read": true, 00:06:40.694 "write": true, 00:06:40.694 "unmap": true, 00:06:40.694 "flush": true, 00:06:40.694 "reset": true, 00:06:40.694 "nvme_admin": false, 00:06:40.694 "nvme_io": false, 00:06:40.694 "nvme_io_md": false, 00:06:40.694 "write_zeroes": true, 00:06:40.694 "zcopy": true, 00:06:40.694 "get_zone_info": false, 00:06:40.694 "zone_management": false, 00:06:40.694 "zone_append": false, 00:06:40.694 "compare": false, 00:06:40.694 "compare_and_write": false, 00:06:40.695 "abort": true, 00:06:40.695 "seek_hole": false, 00:06:40.695 "seek_data": false, 00:06:40.695 "copy": true, 00:06:40.695 "nvme_iov_md": false 00:06:40.695 }, 00:06:40.695 "memory_domains": [ 00:06:40.695 { 00:06:40.695 "dma_device_id": "system", 00:06:40.695 "dma_device_type": 1 00:06:40.695 }, 00:06:40.695 { 00:06:40.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:40.695 "dma_device_type": 2 00:06:40.695 } 00:06:40.695 ], 00:06:40.695 "driver_specific": { 00:06:40.695 "passthru": { 00:06:40.695 "name": "Passthru0", 00:06:40.695 "base_bdev_name": "Malloc0" 00:06:40.695 } 00:06:40.695 } 00:06:40.695 } 00:06:40.695 ]' 00:06:40.695 14:32:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:40.695 14:32:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:40.695 14:32:39 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:40.695 14:32:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.695 14:32:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.695 14:32:39 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.695 14:32:39 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:40.695 14:32:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.695 14:32:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.695 14:32:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.695 14:32:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:40.695 14:32:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.695 14:32:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.695 14:32:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.695 14:32:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:40.695 14:32:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:40.953 ************************************ 00:06:40.953 END TEST rpc_integrity 00:06:40.953 ************************************ 00:06:40.953 14:32:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:40.953 00:06:40.953 real 0m0.349s 00:06:40.953 user 0m0.206s 00:06:40.953 sys 0m0.042s 00:06:40.953 14:32:39 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:40.953 14:32:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:40.953 14:32:39 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:40.953 14:32:39 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:40.953 14:32:39 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:40.953 14:32:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.953 ************************************ 00:06:40.953 START TEST rpc_plugins 00:06:40.953 ************************************ 00:06:40.953 14:32:39 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:06:40.953 14:32:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:06:40.953 14:32:39 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.953 14:32:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:40.953 14:32:39 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.953 14:32:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:40.953 14:32:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:40.953 14:32:39 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.953 14:32:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:40.953 14:32:39 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.953 14:32:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:40.953 { 00:06:40.953 "name": "Malloc1", 00:06:40.953 "aliases": [ 00:06:40.953 "408abdad-3b46-40a3-9218-193ead3e7853" 00:06:40.953 ], 00:06:40.953 "product_name": "Malloc disk", 00:06:40.953 "block_size": 4096, 00:06:40.953 "num_blocks": 256, 00:06:40.953 "uuid": "408abdad-3b46-40a3-9218-193ead3e7853", 00:06:40.953 "assigned_rate_limits": { 00:06:40.953 "rw_ios_per_sec": 0, 00:06:40.953 "rw_mbytes_per_sec": 0, 00:06:40.953 "r_mbytes_per_sec": 0, 00:06:40.953 "w_mbytes_per_sec": 0 00:06:40.953 }, 00:06:40.953 "claimed": false, 00:06:40.953 "zoned": false, 00:06:40.953 "supported_io_types": { 00:06:40.953 "read": true, 00:06:40.953 "write": true, 00:06:40.953 "unmap": true, 00:06:40.953 "flush": true, 00:06:40.953 "reset": true, 00:06:40.953 "nvme_admin": false, 00:06:40.953 "nvme_io": false, 00:06:40.953 "nvme_io_md": false, 00:06:40.953 "write_zeroes": true, 00:06:40.953 "zcopy": true, 00:06:40.953 "get_zone_info": false, 00:06:40.953 "zone_management": false, 00:06:40.953 "zone_append": false, 00:06:40.953 "compare": false, 00:06:40.953 "compare_and_write": false, 00:06:40.953 "abort": true, 00:06:40.953 "seek_hole": false, 00:06:40.953 "seek_data": false, 00:06:40.953 "copy": 
true, 00:06:40.953 "nvme_iov_md": false 00:06:40.953 }, 00:06:40.953 "memory_domains": [ 00:06:40.953 { 00:06:40.953 "dma_device_id": "system", 00:06:40.953 "dma_device_type": 1 00:06:40.953 }, 00:06:40.953 { 00:06:40.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:40.953 "dma_device_type": 2 00:06:40.953 } 00:06:40.953 ], 00:06:40.953 "driver_specific": {} 00:06:40.953 } 00:06:40.953 ]' 00:06:40.953 14:32:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:40.953 14:32:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:40.953 14:32:39 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:40.953 14:32:39 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.953 14:32:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:40.953 14:32:39 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.953 14:32:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:40.953 14:32:39 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.953 14:32:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:40.953 14:32:39 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.953 14:32:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:40.953 14:32:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:40.953 ************************************ 00:06:40.953 END TEST rpc_plugins 00:06:40.953 ************************************ 00:06:40.953 14:32:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:40.953 00:06:40.953 real 0m0.170s 00:06:40.953 user 0m0.103s 00:06:40.953 sys 0m0.024s 00:06:40.953 14:32:40 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:40.953 14:32:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:41.213 14:32:40 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:41.213 14:32:40 rpc -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:41.213 14:32:40 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:41.213 14:32:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.213 ************************************ 00:06:41.213 START TEST rpc_trace_cmd_test 00:06:41.213 ************************************ 00:06:41.213 14:32:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:06:41.213 14:32:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:41.213 14:32:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:41.213 14:32:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.213 14:32:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.213 14:32:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.213 14:32:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:41.213 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56831", 00:06:41.213 "tpoint_group_mask": "0x8", 00:06:41.213 "iscsi_conn": { 00:06:41.213 "mask": "0x2", 00:06:41.213 "tpoint_mask": "0x0" 00:06:41.213 }, 00:06:41.213 "scsi": { 00:06:41.213 "mask": "0x4", 00:06:41.213 "tpoint_mask": "0x0" 00:06:41.213 }, 00:06:41.213 "bdev": { 00:06:41.213 "mask": "0x8", 00:06:41.213 "tpoint_mask": "0xffffffffffffffff" 00:06:41.213 }, 00:06:41.213 "nvmf_rdma": { 00:06:41.213 "mask": "0x10", 00:06:41.213 "tpoint_mask": "0x0" 00:06:41.213 }, 00:06:41.213 "nvmf_tcp": { 00:06:41.213 "mask": "0x20", 00:06:41.213 "tpoint_mask": "0x0" 00:06:41.213 }, 00:06:41.213 "ftl": { 00:06:41.213 "mask": "0x40", 00:06:41.213 "tpoint_mask": "0x0" 00:06:41.213 }, 00:06:41.213 "blobfs": { 00:06:41.213 "mask": "0x80", 00:06:41.213 "tpoint_mask": "0x0" 00:06:41.213 }, 00:06:41.213 "dsa": { 00:06:41.213 "mask": "0x200", 00:06:41.213 "tpoint_mask": "0x0" 00:06:41.213 }, 00:06:41.213 "thread": { 00:06:41.213 "mask": "0x400", 00:06:41.213 
"tpoint_mask": "0x0" 00:06:41.213 }, 00:06:41.213 "nvme_pcie": { 00:06:41.213 "mask": "0x800", 00:06:41.213 "tpoint_mask": "0x0" 00:06:41.213 }, 00:06:41.213 "iaa": { 00:06:41.213 "mask": "0x1000", 00:06:41.213 "tpoint_mask": "0x0" 00:06:41.213 }, 00:06:41.213 "nvme_tcp": { 00:06:41.213 "mask": "0x2000", 00:06:41.213 "tpoint_mask": "0x0" 00:06:41.213 }, 00:06:41.213 "bdev_nvme": { 00:06:41.213 "mask": "0x4000", 00:06:41.213 "tpoint_mask": "0x0" 00:06:41.213 }, 00:06:41.213 "sock": { 00:06:41.213 "mask": "0x8000", 00:06:41.213 "tpoint_mask": "0x0" 00:06:41.213 }, 00:06:41.213 "blob": { 00:06:41.213 "mask": "0x10000", 00:06:41.213 "tpoint_mask": "0x0" 00:06:41.213 }, 00:06:41.213 "bdev_raid": { 00:06:41.213 "mask": "0x20000", 00:06:41.213 "tpoint_mask": "0x0" 00:06:41.213 }, 00:06:41.213 "scheduler": { 00:06:41.213 "mask": "0x40000", 00:06:41.213 "tpoint_mask": "0x0" 00:06:41.213 } 00:06:41.213 }' 00:06:41.213 14:32:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:41.213 14:32:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:41.213 14:32:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:41.213 14:32:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:41.213 14:32:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:41.213 14:32:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:41.213 14:32:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:41.472 14:32:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:41.472 14:32:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:41.472 ************************************ 00:06:41.472 END TEST rpc_trace_cmd_test 00:06:41.472 ************************************ 00:06:41.472 14:32:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:41.472 00:06:41.472 real 0m0.291s 00:06:41.472 user 
0m0.256s 00:06:41.472 sys 0m0.024s 00:06:41.472 14:32:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:41.472 14:32:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.472 14:32:40 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:41.472 14:32:40 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:41.472 14:32:40 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:41.472 14:32:40 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:41.472 14:32:40 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:41.472 14:32:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.472 ************************************ 00:06:41.472 START TEST rpc_daemon_integrity 00:06:41.472 ************************************ 00:06:41.472 14:32:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:06:41.472 14:32:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:41.472 14:32:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.472 14:32:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.472 14:32:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.472 14:32:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:41.472 14:32:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:41.472 14:32:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:41.472 14:32:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:41.472 14:32:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.472 14:32:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.472 14:32:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.472 14:32:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:06:41.472 14:32:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:41.472 14:32:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.472 14:32:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.472 14:32:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.472 14:32:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:41.472 { 00:06:41.472 "name": "Malloc2", 00:06:41.472 "aliases": [ 00:06:41.472 "2444bd2a-cf14-47c2-b94e-2cd21e820a70" 00:06:41.472 ], 00:06:41.472 "product_name": "Malloc disk", 00:06:41.472 "block_size": 512, 00:06:41.472 "num_blocks": 16384, 00:06:41.472 "uuid": "2444bd2a-cf14-47c2-b94e-2cd21e820a70", 00:06:41.472 "assigned_rate_limits": { 00:06:41.472 "rw_ios_per_sec": 0, 00:06:41.472 "rw_mbytes_per_sec": 0, 00:06:41.472 "r_mbytes_per_sec": 0, 00:06:41.472 "w_mbytes_per_sec": 0 00:06:41.472 }, 00:06:41.472 "claimed": false, 00:06:41.472 "zoned": false, 00:06:41.472 "supported_io_types": { 00:06:41.472 "read": true, 00:06:41.472 "write": true, 00:06:41.472 "unmap": true, 00:06:41.472 "flush": true, 00:06:41.472 "reset": true, 00:06:41.472 "nvme_admin": false, 00:06:41.472 "nvme_io": false, 00:06:41.472 "nvme_io_md": false, 00:06:41.472 "write_zeroes": true, 00:06:41.472 "zcopy": true, 00:06:41.472 "get_zone_info": false, 00:06:41.472 "zone_management": false, 00:06:41.472 "zone_append": false, 00:06:41.472 "compare": false, 00:06:41.472 "compare_and_write": false, 00:06:41.472 "abort": true, 00:06:41.472 "seek_hole": false, 00:06:41.472 "seek_data": false, 00:06:41.472 "copy": true, 00:06:41.472 "nvme_iov_md": false 00:06:41.472 }, 00:06:41.472 "memory_domains": [ 00:06:41.472 { 00:06:41.472 "dma_device_id": "system", 00:06:41.472 "dma_device_type": 1 00:06:41.472 }, 00:06:41.472 { 00:06:41.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.472 "dma_device_type": 2 00:06:41.472 } 
00:06:41.472 ], 00:06:41.472 "driver_specific": {} 00:06:41.472 } 00:06:41.472 ]' 00:06:41.472 14:32:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:41.732 14:32:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:41.732 14:32:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:41.732 14:32:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.732 14:32:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.732 [2024-11-04 14:32:40.608121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:41.732 [2024-11-04 14:32:40.608203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:41.732 [2024-11-04 14:32:40.608243] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:06:41.732 [2024-11-04 14:32:40.608272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:41.732 [2024-11-04 14:32:40.611770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:41.732 Passthru0 00:06:41.732 [2024-11-04 14:32:40.611968] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:41.732 14:32:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.732 14:32:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:41.732 14:32:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.732 14:32:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.732 14:32:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.732 14:32:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:41.732 { 00:06:41.732 "name": "Malloc2", 00:06:41.732 "aliases": [ 00:06:41.732 "2444bd2a-cf14-47c2-b94e-2cd21e820a70" 
00:06:41.732 ], 00:06:41.732 "product_name": "Malloc disk", 00:06:41.732 "block_size": 512, 00:06:41.732 "num_blocks": 16384, 00:06:41.732 "uuid": "2444bd2a-cf14-47c2-b94e-2cd21e820a70", 00:06:41.732 "assigned_rate_limits": { 00:06:41.732 "rw_ios_per_sec": 0, 00:06:41.732 "rw_mbytes_per_sec": 0, 00:06:41.732 "r_mbytes_per_sec": 0, 00:06:41.732 "w_mbytes_per_sec": 0 00:06:41.732 }, 00:06:41.732 "claimed": true, 00:06:41.732 "claim_type": "exclusive_write", 00:06:41.732 "zoned": false, 00:06:41.732 "supported_io_types": { 00:06:41.732 "read": true, 00:06:41.732 "write": true, 00:06:41.732 "unmap": true, 00:06:41.732 "flush": true, 00:06:41.732 "reset": true, 00:06:41.732 "nvme_admin": false, 00:06:41.732 "nvme_io": false, 00:06:41.732 "nvme_io_md": false, 00:06:41.732 "write_zeroes": true, 00:06:41.732 "zcopy": true, 00:06:41.732 "get_zone_info": false, 00:06:41.732 "zone_management": false, 00:06:41.732 "zone_append": false, 00:06:41.732 "compare": false, 00:06:41.732 "compare_and_write": false, 00:06:41.732 "abort": true, 00:06:41.732 "seek_hole": false, 00:06:41.732 "seek_data": false, 00:06:41.732 "copy": true, 00:06:41.732 "nvme_iov_md": false 00:06:41.732 }, 00:06:41.732 "memory_domains": [ 00:06:41.732 { 00:06:41.732 "dma_device_id": "system", 00:06:41.732 "dma_device_type": 1 00:06:41.732 }, 00:06:41.732 { 00:06:41.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.732 "dma_device_type": 2 00:06:41.732 } 00:06:41.732 ], 00:06:41.732 "driver_specific": {} 00:06:41.732 }, 00:06:41.732 { 00:06:41.732 "name": "Passthru0", 00:06:41.732 "aliases": [ 00:06:41.732 "b64ade32-390a-5b85-81f5-41fe1649f431" 00:06:41.732 ], 00:06:41.732 "product_name": "passthru", 00:06:41.732 "block_size": 512, 00:06:41.732 "num_blocks": 16384, 00:06:41.732 "uuid": "b64ade32-390a-5b85-81f5-41fe1649f431", 00:06:41.732 "assigned_rate_limits": { 00:06:41.732 "rw_ios_per_sec": 0, 00:06:41.732 "rw_mbytes_per_sec": 0, 00:06:41.732 "r_mbytes_per_sec": 0, 00:06:41.732 "w_mbytes_per_sec": 0 
00:06:41.732 }, 00:06:41.732 "claimed": false, 00:06:41.732 "zoned": false, 00:06:41.732 "supported_io_types": { 00:06:41.732 "read": true, 00:06:41.732 "write": true, 00:06:41.732 "unmap": true, 00:06:41.732 "flush": true, 00:06:41.732 "reset": true, 00:06:41.732 "nvme_admin": false, 00:06:41.732 "nvme_io": false, 00:06:41.732 "nvme_io_md": false, 00:06:41.732 "write_zeroes": true, 00:06:41.732 "zcopy": true, 00:06:41.732 "get_zone_info": false, 00:06:41.733 "zone_management": false, 00:06:41.733 "zone_append": false, 00:06:41.733 "compare": false, 00:06:41.733 "compare_and_write": false, 00:06:41.733 "abort": true, 00:06:41.733 "seek_hole": false, 00:06:41.733 "seek_data": false, 00:06:41.733 "copy": true, 00:06:41.733 "nvme_iov_md": false 00:06:41.733 }, 00:06:41.733 "memory_domains": [ 00:06:41.733 { 00:06:41.733 "dma_device_id": "system", 00:06:41.733 "dma_device_type": 1 00:06:41.733 }, 00:06:41.733 { 00:06:41.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.733 "dma_device_type": 2 00:06:41.733 } 00:06:41.733 ], 00:06:41.733 "driver_specific": { 00:06:41.733 "passthru": { 00:06:41.733 "name": "Passthru0", 00:06:41.733 "base_bdev_name": "Malloc2" 00:06:41.733 } 00:06:41.733 } 00:06:41.733 } 00:06:41.733 ]' 00:06:41.733 14:32:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:41.733 14:32:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:41.733 14:32:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:41.733 14:32:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.733 14:32:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.733 14:32:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.733 14:32:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:41.733 14:32:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:06:41.733 14:32:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.733 14:32:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.733 14:32:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:41.733 14:32:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.733 14:32:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.733 14:32:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.733 14:32:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:41.733 14:32:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:41.733 ************************************ 00:06:41.733 END TEST rpc_daemon_integrity 00:06:41.733 ************************************ 00:06:41.733 14:32:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:41.733 00:06:41.733 real 0m0.362s 00:06:41.733 user 0m0.212s 00:06:41.733 sys 0m0.051s 00:06:41.733 14:32:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:41.733 14:32:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:41.733 14:32:40 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:41.733 14:32:40 rpc -- rpc/rpc.sh@84 -- # killprocess 56831 00:06:41.733 14:32:40 rpc -- common/autotest_common.sh@952 -- # '[' -z 56831 ']' 00:06:41.733 14:32:40 rpc -- common/autotest_common.sh@956 -- # kill -0 56831 00:06:41.733 14:32:40 rpc -- common/autotest_common.sh@957 -- # uname 00:06:41.992 14:32:40 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:41.992 14:32:40 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56831 00:06:41.992 killing process with pid 56831 00:06:41.992 14:32:40 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:41.992 14:32:40 rpc -- common/autotest_common.sh@962 -- 
# '[' reactor_0 = sudo ']' 00:06:41.992 14:32:40 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56831' 00:06:41.992 14:32:40 rpc -- common/autotest_common.sh@971 -- # kill 56831 00:06:41.992 14:32:40 rpc -- common/autotest_common.sh@976 -- # wait 56831 00:06:44.524 00:06:44.524 real 0m5.386s 00:06:44.524 user 0m6.069s 00:06:44.524 sys 0m0.954s 00:06:44.524 ************************************ 00:06:44.524 END TEST rpc 00:06:44.524 ************************************ 00:06:44.524 14:32:43 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:44.525 14:32:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.525 14:32:43 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:44.525 14:32:43 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:44.525 14:32:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:44.525 14:32:43 -- common/autotest_common.sh@10 -- # set +x 00:06:44.525 ************************************ 00:06:44.525 START TEST skip_rpc 00:06:44.525 ************************************ 00:06:44.525 14:32:43 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:44.525 * Looking for test storage... 
00:06:44.525 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:44.525 14:32:43 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:44.525 14:32:43 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:44.525 14:32:43 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:44.525 14:32:43 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:44.525 14:32:43 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.525 14:32:43 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.525 14:32:43 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.525 14:32:43 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.525 14:32:43 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.525 14:32:43 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.525 14:32:43 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.525 14:32:43 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.525 14:32:43 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.525 14:32:43 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.525 14:32:43 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.525 14:32:43 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:44.525 14:32:43 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:44.525 14:32:43 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.525 14:32:43 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:44.525 14:32:43 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:44.525 14:32:43 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:44.525 14:32:43 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.525 14:32:43 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:44.525 14:32:43 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.525 14:32:43 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:44.525 14:32:43 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:44.525 14:32:43 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.525 14:32:43 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:44.525 14:32:43 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.525 14:32:43 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.525 14:32:43 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.525 14:32:43 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:44.525 14:32:43 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.525 14:32:43 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:44.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.525 --rc genhtml_branch_coverage=1 00:06:44.525 --rc genhtml_function_coverage=1 00:06:44.525 --rc genhtml_legend=1 00:06:44.525 --rc geninfo_all_blocks=1 00:06:44.525 --rc geninfo_unexecuted_blocks=1 00:06:44.525 00:06:44.525 ' 00:06:44.525 14:32:43 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:44.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.525 --rc genhtml_branch_coverage=1 00:06:44.525 --rc genhtml_function_coverage=1 00:06:44.525 --rc genhtml_legend=1 00:06:44.525 --rc geninfo_all_blocks=1 00:06:44.525 --rc geninfo_unexecuted_blocks=1 00:06:44.525 00:06:44.525 ' 00:06:44.525 14:32:43 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:06:44.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.525 --rc genhtml_branch_coverage=1 00:06:44.525 --rc genhtml_function_coverage=1 00:06:44.525 --rc genhtml_legend=1 00:06:44.525 --rc geninfo_all_blocks=1 00:06:44.525 --rc geninfo_unexecuted_blocks=1 00:06:44.525 00:06:44.525 ' 00:06:44.525 14:32:43 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:44.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.525 --rc genhtml_branch_coverage=1 00:06:44.525 --rc genhtml_function_coverage=1 00:06:44.525 --rc genhtml_legend=1 00:06:44.525 --rc geninfo_all_blocks=1 00:06:44.525 --rc geninfo_unexecuted_blocks=1 00:06:44.525 00:06:44.525 ' 00:06:44.525 14:32:43 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:44.525 14:32:43 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:44.525 14:32:43 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:44.525 14:32:43 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:44.525 14:32:43 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:44.525 14:32:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.525 ************************************ 00:06:44.525 START TEST skip_rpc 00:06:44.525 ************************************ 00:06:44.525 14:32:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:06:44.525 14:32:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57060 00:06:44.525 14:32:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:44.525 14:32:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:44.525 14:32:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:44.525 [2024-11-04 14:32:43.573374] Starting SPDK v25.01-pre 
git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:06:44.525 [2024-11-04 14:32:43.574575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57060 ] 00:06:44.784 [2024-11-04 14:32:43.771593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.043 [2024-11-04 14:32:43.933622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.315 14:32:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:49.315 14:32:48 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:49.315 14:32:48 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:49.315 14:32:48 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:49.315 14:32:48 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:49.315 14:32:48 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:49.315 14:32:48 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:49.315 14:32:48 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:49.315 14:32:48 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.315 14:32:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.315 14:32:48 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:49.315 14:32:48 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:49.315 14:32:48 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:49.315 14:32:48 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:49.315 14:32:48 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:06:49.315 14:32:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:49.315 14:32:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57060 00:06:49.315 14:32:48 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 57060 ']' 00:06:49.315 14:32:48 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 57060 00:06:49.315 14:32:48 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:06:49.575 14:32:48 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:49.575 14:32:48 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57060 00:06:49.575 14:32:48 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:49.575 14:32:48 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:49.575 14:32:48 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57060' 00:06:49.575 killing process with pid 57060 00:06:49.575 14:32:48 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 57060 00:06:49.575 14:32:48 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 57060 00:06:52.133 00:06:52.133 real 0m7.291s 00:06:52.133 user 0m6.694s 00:06:52.133 sys 0m0.485s 00:06:52.133 ************************************ 00:06:52.133 END TEST skip_rpc 00:06:52.133 ************************************ 00:06:52.133 14:32:50 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:52.133 14:32:50 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.133 14:32:50 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:52.133 14:32:50 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:52.133 14:32:50 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:52.133 14:32:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.133 
************************************ 00:06:52.133 START TEST skip_rpc_with_json 00:06:52.133 ************************************ 00:06:52.133 14:32:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:06:52.133 14:32:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:52.133 14:32:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57164 00:06:52.133 14:32:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:52.133 14:32:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57164 00:06:52.133 14:32:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 57164 ']' 00:06:52.133 14:32:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:52.133 14:32:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.133 14:32:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:52.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.133 14:32:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.133 14:32:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:52.133 14:32:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:52.133 [2024-11-04 14:32:50.891895] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:06:52.133 [2024-11-04 14:32:50.892116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57164 ] 00:06:52.133 [2024-11-04 14:32:51.077700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.133 [2024-11-04 14:32:51.208589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.069 14:32:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:53.069 14:32:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:06:53.069 14:32:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:53.069 14:32:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.069 14:32:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:53.069 [2024-11-04 14:32:52.091615] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:53.069 request: 00:06:53.069 { 00:06:53.069 "trtype": "tcp", 00:06:53.069 "method": "nvmf_get_transports", 00:06:53.069 "req_id": 1 00:06:53.069 } 00:06:53.069 Got JSON-RPC error response 00:06:53.069 response: 00:06:53.069 { 00:06:53.069 "code": -19, 00:06:53.069 "message": "No such device" 00:06:53.069 } 00:06:53.069 14:32:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:53.069 14:32:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:53.069 14:32:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.069 14:32:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:53.069 [2024-11-04 14:32:52.103748] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:06:53.069 14:32:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.070 14:32:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:53.070 14:32:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.070 14:32:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:53.328 14:32:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.328 14:32:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:53.328 { 00:06:53.328 "subsystems": [ 00:06:53.328 { 00:06:53.328 "subsystem": "fsdev", 00:06:53.328 "config": [ 00:06:53.328 { 00:06:53.328 "method": "fsdev_set_opts", 00:06:53.329 "params": { 00:06:53.329 "fsdev_io_pool_size": 65535, 00:06:53.329 "fsdev_io_cache_size": 256 00:06:53.329 } 00:06:53.329 } 00:06:53.329 ] 00:06:53.329 }, 00:06:53.329 { 00:06:53.329 "subsystem": "keyring", 00:06:53.329 "config": [] 00:06:53.329 }, 00:06:53.329 { 00:06:53.329 "subsystem": "iobuf", 00:06:53.329 "config": [ 00:06:53.329 { 00:06:53.329 "method": "iobuf_set_options", 00:06:53.329 "params": { 00:06:53.329 "small_pool_count": 8192, 00:06:53.329 "large_pool_count": 1024, 00:06:53.329 "small_bufsize": 8192, 00:06:53.329 "large_bufsize": 135168, 00:06:53.329 "enable_numa": false 00:06:53.329 } 00:06:53.329 } 00:06:53.329 ] 00:06:53.329 }, 00:06:53.329 { 00:06:53.329 "subsystem": "sock", 00:06:53.329 "config": [ 00:06:53.329 { 00:06:53.329 "method": "sock_set_default_impl", 00:06:53.329 "params": { 00:06:53.329 "impl_name": "posix" 00:06:53.329 } 00:06:53.329 }, 00:06:53.329 { 00:06:53.329 "method": "sock_impl_set_options", 00:06:53.329 "params": { 00:06:53.329 "impl_name": "ssl", 00:06:53.329 "recv_buf_size": 4096, 00:06:53.329 "send_buf_size": 4096, 00:06:53.329 "enable_recv_pipe": true, 00:06:53.329 "enable_quickack": false, 00:06:53.329 
"enable_placement_id": 0, 00:06:53.329 "enable_zerocopy_send_server": true, 00:06:53.329 "enable_zerocopy_send_client": false, 00:06:53.329 "zerocopy_threshold": 0, 00:06:53.329 "tls_version": 0, 00:06:53.329 "enable_ktls": false 00:06:53.329 } 00:06:53.329 }, 00:06:53.329 { 00:06:53.329 "method": "sock_impl_set_options", 00:06:53.329 "params": { 00:06:53.329 "impl_name": "posix", 00:06:53.329 "recv_buf_size": 2097152, 00:06:53.329 "send_buf_size": 2097152, 00:06:53.329 "enable_recv_pipe": true, 00:06:53.329 "enable_quickack": false, 00:06:53.329 "enable_placement_id": 0, 00:06:53.329 "enable_zerocopy_send_server": true, 00:06:53.329 "enable_zerocopy_send_client": false, 00:06:53.329 "zerocopy_threshold": 0, 00:06:53.329 "tls_version": 0, 00:06:53.329 "enable_ktls": false 00:06:53.329 } 00:06:53.329 } 00:06:53.329 ] 00:06:53.329 }, 00:06:53.329 { 00:06:53.329 "subsystem": "vmd", 00:06:53.329 "config": [] 00:06:53.329 }, 00:06:53.329 { 00:06:53.329 "subsystem": "accel", 00:06:53.329 "config": [ 00:06:53.329 { 00:06:53.329 "method": "accel_set_options", 00:06:53.329 "params": { 00:06:53.329 "small_cache_size": 128, 00:06:53.329 "large_cache_size": 16, 00:06:53.329 "task_count": 2048, 00:06:53.329 "sequence_count": 2048, 00:06:53.329 "buf_count": 2048 00:06:53.329 } 00:06:53.329 } 00:06:53.329 ] 00:06:53.329 }, 00:06:53.329 { 00:06:53.329 "subsystem": "bdev", 00:06:53.329 "config": [ 00:06:53.329 { 00:06:53.329 "method": "bdev_set_options", 00:06:53.329 "params": { 00:06:53.329 "bdev_io_pool_size": 65535, 00:06:53.329 "bdev_io_cache_size": 256, 00:06:53.329 "bdev_auto_examine": true, 00:06:53.329 "iobuf_small_cache_size": 128, 00:06:53.329 "iobuf_large_cache_size": 16 00:06:53.329 } 00:06:53.329 }, 00:06:53.329 { 00:06:53.329 "method": "bdev_raid_set_options", 00:06:53.329 "params": { 00:06:53.329 "process_window_size_kb": 1024, 00:06:53.329 "process_max_bandwidth_mb_sec": 0 00:06:53.329 } 00:06:53.329 }, 00:06:53.329 { 00:06:53.329 "method": "bdev_iscsi_set_options", 
00:06:53.329 "params": { 00:06:53.329 "timeout_sec": 30 00:06:53.329 } 00:06:53.329 }, 00:06:53.329 { 00:06:53.329 "method": "bdev_nvme_set_options", 00:06:53.329 "params": { 00:06:53.329 "action_on_timeout": "none", 00:06:53.329 "timeout_us": 0, 00:06:53.329 "timeout_admin_us": 0, 00:06:53.329 "keep_alive_timeout_ms": 10000, 00:06:53.329 "arbitration_burst": 0, 00:06:53.329 "low_priority_weight": 0, 00:06:53.329 "medium_priority_weight": 0, 00:06:53.329 "high_priority_weight": 0, 00:06:53.329 "nvme_adminq_poll_period_us": 10000, 00:06:53.329 "nvme_ioq_poll_period_us": 0, 00:06:53.329 "io_queue_requests": 0, 00:06:53.329 "delay_cmd_submit": true, 00:06:53.329 "transport_retry_count": 4, 00:06:53.329 "bdev_retry_count": 3, 00:06:53.329 "transport_ack_timeout": 0, 00:06:53.329 "ctrlr_loss_timeout_sec": 0, 00:06:53.329 "reconnect_delay_sec": 0, 00:06:53.329 "fast_io_fail_timeout_sec": 0, 00:06:53.329 "disable_auto_failback": false, 00:06:53.329 "generate_uuids": false, 00:06:53.329 "transport_tos": 0, 00:06:53.329 "nvme_error_stat": false, 00:06:53.329 "rdma_srq_size": 0, 00:06:53.329 "io_path_stat": false, 00:06:53.329 "allow_accel_sequence": false, 00:06:53.329 "rdma_max_cq_size": 0, 00:06:53.329 "rdma_cm_event_timeout_ms": 0, 00:06:53.329 "dhchap_digests": [ 00:06:53.329 "sha256", 00:06:53.329 "sha384", 00:06:53.329 "sha512" 00:06:53.329 ], 00:06:53.329 "dhchap_dhgroups": [ 00:06:53.329 "null", 00:06:53.329 "ffdhe2048", 00:06:53.329 "ffdhe3072", 00:06:53.329 "ffdhe4096", 00:06:53.329 "ffdhe6144", 00:06:53.329 "ffdhe8192" 00:06:53.329 ] 00:06:53.329 } 00:06:53.329 }, 00:06:53.329 { 00:06:53.329 "method": "bdev_nvme_set_hotplug", 00:06:53.329 "params": { 00:06:53.329 "period_us": 100000, 00:06:53.329 "enable": false 00:06:53.329 } 00:06:53.329 }, 00:06:53.329 { 00:06:53.329 "method": "bdev_wait_for_examine" 00:06:53.329 } 00:06:53.329 ] 00:06:53.329 }, 00:06:53.329 { 00:06:53.329 "subsystem": "scsi", 00:06:53.329 "config": null 00:06:53.329 }, 00:06:53.329 { 
00:06:53.329 "subsystem": "scheduler", 00:06:53.329 "config": [ 00:06:53.329 { 00:06:53.329 "method": "framework_set_scheduler", 00:06:53.329 "params": { 00:06:53.329 "name": "static" 00:06:53.329 } 00:06:53.329 } 00:06:53.329 ] 00:06:53.329 }, 00:06:53.329 { 00:06:53.329 "subsystem": "vhost_scsi", 00:06:53.329 "config": [] 00:06:53.329 }, 00:06:53.329 { 00:06:53.329 "subsystem": "vhost_blk", 00:06:53.329 "config": [] 00:06:53.329 }, 00:06:53.329 { 00:06:53.329 "subsystem": "ublk", 00:06:53.329 "config": [] 00:06:53.329 }, 00:06:53.329 { 00:06:53.329 "subsystem": "nbd", 00:06:53.329 "config": [] 00:06:53.329 }, 00:06:53.329 { 00:06:53.329 "subsystem": "nvmf", 00:06:53.329 "config": [ 00:06:53.329 { 00:06:53.329 "method": "nvmf_set_config", 00:06:53.329 "params": { 00:06:53.329 "discovery_filter": "match_any", 00:06:53.329 "admin_cmd_passthru": { 00:06:53.329 "identify_ctrlr": false 00:06:53.329 }, 00:06:53.329 "dhchap_digests": [ 00:06:53.329 "sha256", 00:06:53.329 "sha384", 00:06:53.329 "sha512" 00:06:53.329 ], 00:06:53.329 "dhchap_dhgroups": [ 00:06:53.329 "null", 00:06:53.329 "ffdhe2048", 00:06:53.329 "ffdhe3072", 00:06:53.329 "ffdhe4096", 00:06:53.329 "ffdhe6144", 00:06:53.329 "ffdhe8192" 00:06:53.329 ] 00:06:53.329 } 00:06:53.329 }, 00:06:53.329 { 00:06:53.329 "method": "nvmf_set_max_subsystems", 00:06:53.329 "params": { 00:06:53.329 "max_subsystems": 1024 00:06:53.329 } 00:06:53.329 }, 00:06:53.329 { 00:06:53.329 "method": "nvmf_set_crdt", 00:06:53.329 "params": { 00:06:53.329 "crdt1": 0, 00:06:53.329 "crdt2": 0, 00:06:53.329 "crdt3": 0 00:06:53.329 } 00:06:53.329 }, 00:06:53.329 { 00:06:53.329 "method": "nvmf_create_transport", 00:06:53.329 "params": { 00:06:53.329 "trtype": "TCP", 00:06:53.329 "max_queue_depth": 128, 00:06:53.329 "max_io_qpairs_per_ctrlr": 127, 00:06:53.329 "in_capsule_data_size": 4096, 00:06:53.329 "max_io_size": 131072, 00:06:53.329 "io_unit_size": 131072, 00:06:53.329 "max_aq_depth": 128, 00:06:53.329 "num_shared_buffers": 511, 
00:06:53.329 "buf_cache_size": 4294967295, 00:06:53.329 "dif_insert_or_strip": false, 00:06:53.329 "zcopy": false, 00:06:53.329 "c2h_success": true, 00:06:53.329 "sock_priority": 0, 00:06:53.329 "abort_timeout_sec": 1, 00:06:53.329 "ack_timeout": 0, 00:06:53.329 "data_wr_pool_size": 0 00:06:53.329 } 00:06:53.329 } 00:06:53.329 ] 00:06:53.329 }, 00:06:53.329 { 00:06:53.329 "subsystem": "iscsi", 00:06:53.329 "config": [ 00:06:53.329 { 00:06:53.329 "method": "iscsi_set_options", 00:06:53.329 "params": { 00:06:53.329 "node_base": "iqn.2016-06.io.spdk", 00:06:53.329 "max_sessions": 128, 00:06:53.329 "max_connections_per_session": 2, 00:06:53.329 "max_queue_depth": 64, 00:06:53.330 "default_time2wait": 2, 00:06:53.330 "default_time2retain": 20, 00:06:53.330 "first_burst_length": 8192, 00:06:53.330 "immediate_data": true, 00:06:53.330 "allow_duplicated_isid": false, 00:06:53.330 "error_recovery_level": 0, 00:06:53.330 "nop_timeout": 60, 00:06:53.330 "nop_in_interval": 30, 00:06:53.330 "disable_chap": false, 00:06:53.330 "require_chap": false, 00:06:53.330 "mutual_chap": false, 00:06:53.330 "chap_group": 0, 00:06:53.330 "max_large_datain_per_connection": 64, 00:06:53.330 "max_r2t_per_connection": 4, 00:06:53.330 "pdu_pool_size": 36864, 00:06:53.330 "immediate_data_pool_size": 16384, 00:06:53.330 "data_out_pool_size": 2048 00:06:53.330 } 00:06:53.330 } 00:06:53.330 ] 00:06:53.330 } 00:06:53.330 ] 00:06:53.330 } 00:06:53.330 14:32:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:53.330 14:32:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57164 00:06:53.330 14:32:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57164 ']' 00:06:53.330 14:32:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57164 00:06:53.330 14:32:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:06:53.330 14:32:52 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:53.330 14:32:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57164 00:06:53.330 14:32:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:53.330 killing process with pid 57164 00:06:53.330 14:32:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:53.330 14:32:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57164' 00:06:53.330 14:32:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57164 00:06:53.330 14:32:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57164 00:06:55.861 14:32:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57220 00:06:55.861 14:32:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:55.861 14:32:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:01.129 14:32:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57220 00:07:01.129 14:32:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57220 ']' 00:07:01.129 14:32:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57220 00:07:01.129 14:32:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:07:01.129 14:32:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:01.129 14:32:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57220 00:07:01.129 killing process with pid 57220 00:07:01.129 14:32:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:01.129 14:32:59 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:01.129 14:32:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57220' 00:07:01.129 14:32:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57220 00:07:01.129 14:32:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57220 00:07:03.035 14:33:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:03.035 14:33:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:03.035 00:07:03.035 real 0m11.030s 00:07:03.035 user 0m10.425s 00:07:03.035 sys 0m1.014s 00:07:03.035 14:33:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:03.035 ************************************ 00:07:03.035 14:33:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:03.035 END TEST skip_rpc_with_json 00:07:03.035 ************************************ 00:07:03.035 14:33:01 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:03.035 14:33:01 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:03.035 14:33:01 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:03.035 14:33:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.035 ************************************ 00:07:03.035 START TEST skip_rpc_with_delay 00:07:03.035 ************************************ 00:07:03.035 14:33:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:07:03.035 14:33:01 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:03.035 14:33:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:07:03.035 
14:33:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:03.036 14:33:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:03.036 14:33:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.036 14:33:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:03.036 14:33:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.036 14:33:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:03.036 14:33:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.036 14:33:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:03.036 14:33:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:03.036 14:33:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:03.036 [2024-11-04 14:33:01.957243] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:07:03.036 14:33:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:07:03.036 14:33:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.036 14:33:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:03.036 14:33:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.036 00:07:03.036 real 0m0.172s 00:07:03.036 user 0m0.093s 00:07:03.036 sys 0m0.078s 00:07:03.036 14:33:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:03.036 ************************************ 00:07:03.036 END TEST skip_rpc_with_delay 00:07:03.036 ************************************ 00:07:03.036 14:33:02 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:03.036 14:33:02 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:03.036 14:33:02 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:03.036 14:33:02 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:03.036 14:33:02 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:03.036 14:33:02 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:03.036 14:33:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.036 ************************************ 00:07:03.036 START TEST exit_on_failed_rpc_init 00:07:03.036 ************************************ 00:07:03.036 14:33:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:07:03.036 14:33:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57348 00:07:03.036 14:33:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57348 00:07:03.036 14:33:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 57348 ']' 00:07:03.036 14:33:02 skip_rpc.exit_on_failed_rpc_init -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.036 14:33:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:03.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.036 14:33:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.036 14:33:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:03.036 14:33:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:03.036 14:33:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:03.295 [2024-11-04 14:33:02.200907] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:07:03.295 [2024-11-04 14:33:02.201123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57348 ] 00:07:03.295 [2024-11-04 14:33:02.390112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.554 [2024-11-04 14:33:02.520057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.490 14:33:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:04.490 14:33:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:07:04.490 14:33:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:04.490 14:33:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:04.490 14:33:03 skip_rpc.exit_on_failed_rpc_init 
-- common/autotest_common.sh@650 -- # local es=0 00:07:04.490 14:33:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:04.490 14:33:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:04.490 14:33:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.490 14:33:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:04.490 14:33:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.490 14:33:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:04.490 14:33:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.490 14:33:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:04.490 14:33:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:04.490 14:33:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:04.490 [2024-11-04 14:33:03.501700] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:07:04.490 [2024-11-04 14:33:03.501870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57371 ] 00:07:04.749 [2024-11-04 14:33:03.684016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.749 [2024-11-04 14:33:03.843293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.749 [2024-11-04 14:33:03.843434] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:07:04.749 [2024-11-04 14:33:03.843461] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:04.749 [2024-11-04 14:33:03.843494] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:05.008 14:33:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:07:05.008 14:33:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:05.008 14:33:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:07:05.008 14:33:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:07:05.008 14:33:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:07:05.008 14:33:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:05.008 14:33:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:05.008 14:33:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57348 00:07:05.008 14:33:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 57348 ']' 00:07:05.008 14:33:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 57348 00:07:05.008 14:33:04 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:07:05.267 14:33:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:05.267 14:33:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57348 00:07:05.267 14:33:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:05.267 14:33:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:05.267 killing process with pid 57348 00:07:05.267 14:33:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57348' 00:07:05.267 14:33:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 57348 00:07:05.267 14:33:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 57348 00:07:07.823 00:07:07.823 real 0m4.331s 00:07:07.823 user 0m4.825s 00:07:07.823 sys 0m0.670s 00:07:07.823 14:33:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:07.823 14:33:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:07.823 ************************************ 00:07:07.823 END TEST exit_on_failed_rpc_init 00:07:07.823 ************************************ 00:07:07.823 14:33:06 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:07.823 ************************************ 00:07:07.823 END TEST skip_rpc 00:07:07.823 ************************************ 00:07:07.823 00:07:07.823 real 0m23.223s 00:07:07.823 user 0m22.221s 00:07:07.823 sys 0m2.452s 00:07:07.823 14:33:06 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:07.823 14:33:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.823 14:33:06 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:07.823 14:33:06 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:07.823 14:33:06 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:07.823 14:33:06 -- common/autotest_common.sh@10 -- # set +x 00:07:07.823 ************************************ 00:07:07.823 START TEST rpc_client 00:07:07.823 ************************************ 00:07:07.823 14:33:06 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:07.823 * Looking for test storage... 00:07:07.823 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:07.823 14:33:06 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:07.823 14:33:06 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:07.823 14:33:06 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:07:07.823 14:33:06 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:07.823 14:33:06 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.823 14:33:06 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.823 14:33:06 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.823 14:33:06 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.823 14:33:06 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.823 14:33:06 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.823 14:33:06 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.823 14:33:06 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.823 14:33:06 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.823 14:33:06 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.823 14:33:06 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.823 14:33:06 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:07:07.823 14:33:06 rpc_client -- scripts/common.sh@345 
-- # : 1 00:07:07.823 14:33:06 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.823 14:33:06 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:07.823 14:33:06 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:07.823 14:33:06 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:07.823 14:33:06 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.824 14:33:06 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:07.824 14:33:06 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.824 14:33:06 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:07.824 14:33:06 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:07.824 14:33:06 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.824 14:33:06 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:07.824 14:33:06 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.824 14:33:06 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.824 14:33:06 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.824 14:33:06 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:07.824 14:33:06 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.824 14:33:06 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:07.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.824 --rc genhtml_branch_coverage=1 00:07:07.824 --rc genhtml_function_coverage=1 00:07:07.824 --rc genhtml_legend=1 00:07:07.824 --rc geninfo_all_blocks=1 00:07:07.824 --rc geninfo_unexecuted_blocks=1 00:07:07.824 00:07:07.824 ' 00:07:07.824 14:33:06 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:07.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.824 --rc genhtml_branch_coverage=1 00:07:07.824 --rc genhtml_function_coverage=1 00:07:07.824 --rc 
genhtml_legend=1 00:07:07.824 --rc geninfo_all_blocks=1 00:07:07.824 --rc geninfo_unexecuted_blocks=1 00:07:07.824 00:07:07.824 ' 00:07:07.824 14:33:06 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:07.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.824 --rc genhtml_branch_coverage=1 00:07:07.824 --rc genhtml_function_coverage=1 00:07:07.824 --rc genhtml_legend=1 00:07:07.824 --rc geninfo_all_blocks=1 00:07:07.824 --rc geninfo_unexecuted_blocks=1 00:07:07.824 00:07:07.824 ' 00:07:07.824 14:33:06 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:07.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.824 --rc genhtml_branch_coverage=1 00:07:07.824 --rc genhtml_function_coverage=1 00:07:07.824 --rc genhtml_legend=1 00:07:07.824 --rc geninfo_all_blocks=1 00:07:07.824 --rc geninfo_unexecuted_blocks=1 00:07:07.824 00:07:07.824 ' 00:07:07.824 14:33:06 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:07.824 OK 00:07:07.824 14:33:06 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:07.824 00:07:07.824 real 0m0.301s 00:07:07.824 user 0m0.196s 00:07:07.824 sys 0m0.114s 00:07:07.824 14:33:06 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:07.824 14:33:06 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:07.824 ************************************ 00:07:07.824 END TEST rpc_client 00:07:07.824 ************************************ 00:07:07.824 14:33:06 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:07.824 14:33:06 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:07.824 14:33:06 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:07.824 14:33:06 -- common/autotest_common.sh@10 -- # set +x 00:07:07.824 ************************************ 00:07:07.824 START TEST json_config 
00:07:07.824 ************************************ 00:07:07.824 14:33:06 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:07.824 14:33:06 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:07.824 14:33:06 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:07:07.824 14:33:06 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:08.083 14:33:07 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:08.083 14:33:07 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.083 14:33:07 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.083 14:33:07 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.083 14:33:07 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.083 14:33:07 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.083 14:33:07 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.083 14:33:07 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.083 14:33:07 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.083 14:33:07 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.083 14:33:07 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.083 14:33:07 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.083 14:33:07 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:08.083 14:33:07 json_config -- scripts/common.sh@345 -- # : 1 00:07:08.083 14:33:07 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.083 14:33:07 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:08.083 14:33:07 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:08.083 14:33:07 json_config -- scripts/common.sh@353 -- # local d=1 00:07:08.083 14:33:07 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.083 14:33:07 json_config -- scripts/common.sh@355 -- # echo 1 00:07:08.083 14:33:07 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.083 14:33:07 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:08.083 14:33:07 json_config -- scripts/common.sh@353 -- # local d=2 00:07:08.083 14:33:07 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.083 14:33:07 json_config -- scripts/common.sh@355 -- # echo 2 00:07:08.083 14:33:07 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.083 14:33:07 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.083 14:33:07 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.083 14:33:07 json_config -- scripts/common.sh@368 -- # return 0 00:07:08.083 14:33:07 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.083 14:33:07 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:08.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.083 --rc genhtml_branch_coverage=1 00:07:08.083 --rc genhtml_function_coverage=1 00:07:08.083 --rc genhtml_legend=1 00:07:08.083 --rc geninfo_all_blocks=1 00:07:08.083 --rc geninfo_unexecuted_blocks=1 00:07:08.083 00:07:08.083 ' 00:07:08.083 14:33:07 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:08.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.083 --rc genhtml_branch_coverage=1 00:07:08.083 --rc genhtml_function_coverage=1 00:07:08.083 --rc genhtml_legend=1 00:07:08.083 --rc geninfo_all_blocks=1 00:07:08.083 --rc geninfo_unexecuted_blocks=1 00:07:08.083 00:07:08.083 ' 00:07:08.083 14:33:07 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:08.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.083 --rc genhtml_branch_coverage=1 00:07:08.083 --rc genhtml_function_coverage=1 00:07:08.083 --rc genhtml_legend=1 00:07:08.083 --rc geninfo_all_blocks=1 00:07:08.083 --rc geninfo_unexecuted_blocks=1 00:07:08.083 00:07:08.083 ' 00:07:08.083 14:33:07 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:08.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.083 --rc genhtml_branch_coverage=1 00:07:08.083 --rc genhtml_function_coverage=1 00:07:08.083 --rc genhtml_legend=1 00:07:08.083 --rc geninfo_all_blocks=1 00:07:08.083 --rc geninfo_unexecuted_blocks=1 00:07:08.083 00:07:08.083 ' 00:07:08.083 14:33:07 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:08.083 14:33:07 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:08.083 14:33:07 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:08.083 14:33:07 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:08.083 14:33:07 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:08.083 14:33:07 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:08.083 14:33:07 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:08.083 14:33:07 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:08.083 14:33:07 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:08.083 14:33:07 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:08.083 14:33:07 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:08.083 14:33:07 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:08.083 14:33:07 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df2914e2-f71b-4480-87e8-79977859965f 00:07:08.083 14:33:07 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=df2914e2-f71b-4480-87e8-79977859965f 00:07:08.083 14:33:07 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:08.083 14:33:07 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:08.083 14:33:07 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:08.083 14:33:07 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:08.083 14:33:07 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:08.083 14:33:07 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:08.083 14:33:07 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.083 14:33:07 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.083 14:33:07 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.083 14:33:07 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.084 14:33:07 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.084 14:33:07 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.084 14:33:07 json_config -- paths/export.sh@5 -- # export PATH 00:07:08.084 14:33:07 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.084 14:33:07 json_config -- nvmf/common.sh@51 -- # : 0 00:07:08.084 14:33:07 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:08.084 14:33:07 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:08.084 14:33:07 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:08.084 14:33:07 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:08.084 14:33:07 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:08.084 14:33:07 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:08.084 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:08.084 14:33:07 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:08.084 14:33:07 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:08.084 14:33:07 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:08.084 14:33:07 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:07:08.084 14:33:07 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:08.084 14:33:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:08.084 14:33:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:08.084 14:33:07 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:08.084 14:33:07 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:07:08.084 WARNING: No tests are enabled so not running JSON configuration tests 00:07:08.084 14:33:07 json_config -- json_config/json_config.sh@28 -- # exit 0 00:07:08.084 00:07:08.084 real 0m0.209s 00:07:08.084 user 0m0.149s 00:07:08.084 sys 0m0.066s 00:07:08.084 14:33:07 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:08.084 14:33:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:08.084 ************************************ 00:07:08.084 END TEST json_config 00:07:08.084 ************************************ 00:07:08.084 14:33:07 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:08.084 14:33:07 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:08.084 14:33:07 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:08.084 14:33:07 -- common/autotest_common.sh@10 -- # set +x 00:07:08.084 ************************************ 00:07:08.084 START TEST json_config_extra_key 00:07:08.084 ************************************ 00:07:08.084 14:33:07 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:08.084 14:33:07 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:08.084 14:33:07 json_config_extra_key -- 
common/autotest_common.sh@1691 -- # lcov --version 00:07:08.084 14:33:07 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:08.343 14:33:07 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:08.343 14:33:07 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.343 14:33:07 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.343 14:33:07 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.343 14:33:07 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.343 14:33:07 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.343 14:33:07 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.343 14:33:07 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.343 14:33:07 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.343 14:33:07 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.343 14:33:07 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.343 14:33:07 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.343 14:33:07 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:08.343 14:33:07 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:08.343 14:33:07 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.343 14:33:07 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:08.343 14:33:07 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:08.343 14:33:07 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:08.343 14:33:07 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.343 14:33:07 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:08.343 14:33:07 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.343 14:33:07 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:08.343 14:33:07 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:08.343 14:33:07 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.343 14:33:07 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:08.343 14:33:07 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.343 14:33:07 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.343 14:33:07 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.343 14:33:07 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:08.343 14:33:07 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.344 14:33:07 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:08.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.344 --rc genhtml_branch_coverage=1 00:07:08.344 --rc genhtml_function_coverage=1 00:07:08.344 --rc genhtml_legend=1 00:07:08.344 --rc geninfo_all_blocks=1 00:07:08.344 --rc geninfo_unexecuted_blocks=1 00:07:08.344 00:07:08.344 ' 00:07:08.344 14:33:07 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:08.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.344 --rc genhtml_branch_coverage=1 00:07:08.344 --rc genhtml_function_coverage=1 00:07:08.344 --rc 
genhtml_legend=1 00:07:08.344 --rc geninfo_all_blocks=1 00:07:08.344 --rc geninfo_unexecuted_blocks=1 00:07:08.344 00:07:08.344 ' 00:07:08.344 14:33:07 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:08.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.344 --rc genhtml_branch_coverage=1 00:07:08.344 --rc genhtml_function_coverage=1 00:07:08.344 --rc genhtml_legend=1 00:07:08.344 --rc geninfo_all_blocks=1 00:07:08.344 --rc geninfo_unexecuted_blocks=1 00:07:08.344 00:07:08.344 ' 00:07:08.344 14:33:07 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:08.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.344 --rc genhtml_branch_coverage=1 00:07:08.344 --rc genhtml_function_coverage=1 00:07:08.344 --rc genhtml_legend=1 00:07:08.344 --rc geninfo_all_blocks=1 00:07:08.344 --rc geninfo_unexecuted_blocks=1 00:07:08.344 00:07:08.344 ' 00:07:08.344 14:33:07 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:08.344 14:33:07 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:08.344 14:33:07 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:08.344 14:33:07 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:08.344 14:33:07 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:08.344 14:33:07 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:08.344 14:33:07 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:08.344 14:33:07 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:08.344 14:33:07 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:08.344 14:33:07 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:08.344 14:33:07 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:08.344 14:33:07 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:08.344 14:33:07 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:df2914e2-f71b-4480-87e8-79977859965f 00:07:08.344 14:33:07 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=df2914e2-f71b-4480-87e8-79977859965f 00:07:08.344 14:33:07 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:08.344 14:33:07 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:08.344 14:33:07 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:08.344 14:33:07 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:08.344 14:33:07 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:08.344 14:33:07 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:08.344 14:33:07 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.344 14:33:07 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.344 14:33:07 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.344 14:33:07 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.344 14:33:07 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.344 14:33:07 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.344 14:33:07 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:08.344 14:33:07 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.344 14:33:07 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:08.344 14:33:07 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:08.344 14:33:07 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:08.344 14:33:07 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:08.344 14:33:07 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:08.344 14:33:07 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:07:08.344 14:33:07 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:08.344 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:08.344 14:33:07 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:08.344 14:33:07 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:08.344 14:33:07 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:08.344 14:33:07 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:08.344 14:33:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:08.344 14:33:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:08.344 14:33:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:08.344 14:33:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:08.344 14:33:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:08.344 14:33:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:08.344 14:33:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:08.344 14:33:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:08.344 14:33:07 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:08.344 INFO: launching applications... 00:07:08.344 14:33:07 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:07:08.344 14:33:07 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:08.344 14:33:07 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:08.344 14:33:07 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:08.344 14:33:07 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:08.344 14:33:07 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:08.344 14:33:07 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:08.344 14:33:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:08.344 14:33:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:08.344 14:33:07 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57576 00:07:08.344 Waiting for target to run... 00:07:08.345 14:33:07 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:08.345 14:33:07 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57576 /var/tmp/spdk_tgt.sock 00:07:08.345 14:33:07 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 57576 ']' 00:07:08.345 14:33:07 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:08.345 14:33:07 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:08.345 14:33:07 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:08.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:07:08.345 14:33:07 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:08.345 14:33:07 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:08.345 14:33:07 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:08.345 [2024-11-04 14:33:07.383553] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:07:08.345 [2024-11-04 14:33:07.383721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57576 ] 00:07:08.914 [2024-11-04 14:33:07.839183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.914 [2024-11-04 14:33:07.955931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.859 14:33:08 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:09.859 14:33:08 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:07:09.859 00:07:09.859 14:33:08 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:09.859 INFO: shutting down applications... 00:07:09.859 14:33:08 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:07:09.860 14:33:08 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:09.860 14:33:08 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:09.860 14:33:08 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:09.860 14:33:08 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57576 ]] 00:07:09.860 14:33:08 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57576 00:07:09.860 14:33:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:09.860 14:33:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:09.860 14:33:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57576 00:07:09.860 14:33:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:10.118 14:33:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:10.118 14:33:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:10.118 14:33:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57576 00:07:10.118 14:33:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:10.692 14:33:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:10.692 14:33:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:10.692 14:33:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57576 00:07:10.692 14:33:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:11.264 14:33:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:11.264 14:33:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:11.264 14:33:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57576 00:07:11.264 14:33:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:11.831 14:33:10 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:07:11.831 14:33:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:11.831 14:33:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57576 00:07:11.831 14:33:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:12.090 14:33:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:12.090 14:33:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:12.090 14:33:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57576 00:07:12.090 14:33:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:12.658 14:33:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:12.658 14:33:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:12.658 14:33:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57576 00:07:12.658 14:33:11 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:12.658 14:33:11 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:12.658 14:33:11 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:12.658 14:33:11 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:12.658 SPDK target shutdown done 00:07:12.658 Success 00:07:12.658 14:33:11 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:12.658 00:07:12.658 real 0m4.570s 00:07:12.658 user 0m4.001s 00:07:12.658 sys 0m0.636s 00:07:12.658 14:33:11 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:12.658 14:33:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:12.658 ************************************ 00:07:12.658 END TEST json_config_extra_key 00:07:12.658 ************************************ 00:07:12.658 14:33:11 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:12.658 14:33:11 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:12.658 14:33:11 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:12.658 14:33:11 -- common/autotest_common.sh@10 -- # set +x 00:07:12.658 ************************************ 00:07:12.658 START TEST alias_rpc 00:07:12.658 ************************************ 00:07:12.658 14:33:11 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:12.918 * Looking for test storage... 00:07:12.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:12.918 14:33:11 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:12.918 14:33:11 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:07:12.918 14:33:11 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:12.918 14:33:11 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:12.918 14:33:11 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:12.918 14:33:11 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:12.918 14:33:11 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:12.918 14:33:11 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.918 14:33:11 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:12.918 14:33:11 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:12.918 14:33:11 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:12.918 14:33:11 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:12.918 14:33:11 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:12.918 14:33:11 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:12.918 14:33:11 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:12.918 14:33:11 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:12.918 14:33:11 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:07:12.918 14:33:11 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:12.918 14:33:11 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:12.918 14:33:11 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:12.918 14:33:11 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:12.918 14:33:11 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.918 14:33:11 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:12.918 14:33:11 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:12.918 14:33:11 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:12.918 14:33:11 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:12.918 14:33:11 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.918 14:33:11 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:12.918 14:33:11 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:12.918 14:33:11 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:12.918 14:33:11 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:12.918 14:33:11 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:12.918 14:33:11 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.918 14:33:11 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:12.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.918 --rc genhtml_branch_coverage=1 00:07:12.918 --rc genhtml_function_coverage=1 00:07:12.918 --rc genhtml_legend=1 00:07:12.918 --rc geninfo_all_blocks=1 00:07:12.918 --rc geninfo_unexecuted_blocks=1 00:07:12.918 00:07:12.918 ' 00:07:12.918 14:33:11 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:12.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.918 --rc genhtml_branch_coverage=1 00:07:12.918 --rc genhtml_function_coverage=1 00:07:12.918 --rc 
genhtml_legend=1 00:07:12.918 --rc geninfo_all_blocks=1 00:07:12.918 --rc geninfo_unexecuted_blocks=1 00:07:12.918 00:07:12.918 ' 00:07:12.918 14:33:11 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:12.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.918 --rc genhtml_branch_coverage=1 00:07:12.918 --rc genhtml_function_coverage=1 00:07:12.918 --rc genhtml_legend=1 00:07:12.918 --rc geninfo_all_blocks=1 00:07:12.918 --rc geninfo_unexecuted_blocks=1 00:07:12.918 00:07:12.918 ' 00:07:12.918 14:33:11 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:12.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.918 --rc genhtml_branch_coverage=1 00:07:12.918 --rc genhtml_function_coverage=1 00:07:12.918 --rc genhtml_legend=1 00:07:12.918 --rc geninfo_all_blocks=1 00:07:12.918 --rc geninfo_unexecuted_blocks=1 00:07:12.918 00:07:12.918 ' 00:07:12.918 14:33:11 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:12.918 14:33:11 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57693 00:07:12.918 14:33:11 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:12.918 14:33:11 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57693 00:07:12.918 14:33:11 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 57693 ']' 00:07:12.918 14:33:11 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.918 14:33:11 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:12.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.918 14:33:11 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:12.918 14:33:11 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:12.918 14:33:11 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.177 [2024-11-04 14:33:12.039265] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:07:13.177 [2024-11-04 14:33:12.039478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57693 ] 00:07:13.177 [2024-11-04 14:33:12.219208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.435 [2024-11-04 14:33:12.346016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.371 14:33:13 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:14.371 14:33:13 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:14.371 14:33:13 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:14.630 14:33:13 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57693 00:07:14.630 14:33:13 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 57693 ']' 00:07:14.630 14:33:13 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 57693 00:07:14.630 14:33:13 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:07:14.630 14:33:13 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:14.630 14:33:13 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57693 00:07:14.630 14:33:13 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:14.630 14:33:13 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:14.630 killing process with pid 57693 00:07:14.630 14:33:13 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57693' 00:07:14.630 14:33:13 alias_rpc -- 
common/autotest_common.sh@971 -- # kill 57693 00:07:14.630 14:33:13 alias_rpc -- common/autotest_common.sh@976 -- # wait 57693 00:07:17.193 00:07:17.193 real 0m4.038s 00:07:17.193 user 0m4.246s 00:07:17.193 sys 0m0.602s 00:07:17.193 14:33:15 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:17.193 14:33:15 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.193 ************************************ 00:07:17.193 END TEST alias_rpc 00:07:17.193 ************************************ 00:07:17.193 14:33:15 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:17.193 14:33:15 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:17.193 14:33:15 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:17.193 14:33:15 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:17.193 14:33:15 -- common/autotest_common.sh@10 -- # set +x 00:07:17.193 ************************************ 00:07:17.193 START TEST spdkcli_tcp 00:07:17.193 ************************************ 00:07:17.193 14:33:15 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:17.193 * Looking for test storage... 
00:07:17.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:17.193 14:33:15 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:17.194 14:33:15 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:07:17.194 14:33:15 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:17.194 14:33:15 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:17.194 14:33:15 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:17.194 14:33:15 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:17.194 14:33:15 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:17.194 14:33:15 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:17.194 14:33:15 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:17.194 14:33:15 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:17.194 14:33:15 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:17.194 14:33:15 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:17.194 14:33:15 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:17.194 14:33:15 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:17.194 14:33:15 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:17.194 14:33:15 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:17.194 14:33:15 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:17.194 14:33:15 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:17.194 14:33:15 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:17.194 14:33:16 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:17.194 14:33:16 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:17.194 14:33:16 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:17.194 14:33:16 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:17.194 14:33:16 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:17.194 14:33:16 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:17.194 14:33:16 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:17.194 14:33:16 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:17.194 14:33:16 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:17.194 14:33:16 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:17.194 14:33:16 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:17.194 14:33:16 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:17.194 14:33:16 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:17.194 14:33:16 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:17.194 14:33:16 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:17.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.194 --rc genhtml_branch_coverage=1 00:07:17.194 --rc genhtml_function_coverage=1 00:07:17.194 --rc genhtml_legend=1 00:07:17.194 --rc geninfo_all_blocks=1 00:07:17.194 --rc geninfo_unexecuted_blocks=1 00:07:17.194 00:07:17.194 ' 00:07:17.194 14:33:16 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:17.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.194 --rc genhtml_branch_coverage=1 00:07:17.194 --rc genhtml_function_coverage=1 00:07:17.194 --rc genhtml_legend=1 00:07:17.194 --rc geninfo_all_blocks=1 00:07:17.194 --rc geninfo_unexecuted_blocks=1 00:07:17.194 00:07:17.194 ' 00:07:17.194 14:33:16 spdkcli_tcp -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:17.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.194 --rc genhtml_branch_coverage=1 00:07:17.194 --rc genhtml_function_coverage=1 00:07:17.194 --rc genhtml_legend=1 00:07:17.194 --rc geninfo_all_blocks=1 00:07:17.194 --rc geninfo_unexecuted_blocks=1 00:07:17.194 00:07:17.194 ' 00:07:17.194 14:33:16 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:17.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:17.194 --rc genhtml_branch_coverage=1 00:07:17.194 --rc genhtml_function_coverage=1 00:07:17.194 --rc genhtml_legend=1 00:07:17.194 --rc geninfo_all_blocks=1 00:07:17.194 --rc geninfo_unexecuted_blocks=1 00:07:17.194 00:07:17.194 ' 00:07:17.194 14:33:16 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:17.194 14:33:16 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:17.194 14:33:16 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:17.194 14:33:16 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:17.194 14:33:16 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:17.194 14:33:16 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:17.194 14:33:16 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:17.194 14:33:16 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:17.194 14:33:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:17.194 14:33:16 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57800 00:07:17.194 14:33:16 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57800 00:07:17.194 14:33:16 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:17.194 14:33:16 spdkcli_tcp -- 
common/autotest_common.sh@833 -- # '[' -z 57800 ']' 00:07:17.194 14:33:16 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.194 14:33:16 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:17.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.194 14:33:16 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.194 14:33:16 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:17.194 14:33:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:17.194 [2024-11-04 14:33:16.146469] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:07:17.194 [2024-11-04 14:33:16.146685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57800 ] 00:07:17.453 [2024-11-04 14:33:16.332400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:17.453 [2024-11-04 14:33:16.479558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.453 [2024-11-04 14:33:16.479559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.468 14:33:17 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:18.468 14:33:17 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:07:18.468 14:33:17 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57817 00:07:18.468 14:33:17 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:18.468 14:33:17 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:18.728 [ 00:07:18.728 "bdev_malloc_delete", 
00:07:18.728 "bdev_malloc_create", 00:07:18.728 "bdev_null_resize", 00:07:18.728 "bdev_null_delete", 00:07:18.728 "bdev_null_create", 00:07:18.728 "bdev_nvme_cuse_unregister", 00:07:18.728 "bdev_nvme_cuse_register", 00:07:18.728 "bdev_opal_new_user", 00:07:18.728 "bdev_opal_set_lock_state", 00:07:18.728 "bdev_opal_delete", 00:07:18.728 "bdev_opal_get_info", 00:07:18.728 "bdev_opal_create", 00:07:18.728 "bdev_nvme_opal_revert", 00:07:18.728 "bdev_nvme_opal_init", 00:07:18.728 "bdev_nvme_send_cmd", 00:07:18.728 "bdev_nvme_set_keys", 00:07:18.728 "bdev_nvme_get_path_iostat", 00:07:18.728 "bdev_nvme_get_mdns_discovery_info", 00:07:18.728 "bdev_nvme_stop_mdns_discovery", 00:07:18.728 "bdev_nvme_start_mdns_discovery", 00:07:18.728 "bdev_nvme_set_multipath_policy", 00:07:18.728 "bdev_nvme_set_preferred_path", 00:07:18.728 "bdev_nvme_get_io_paths", 00:07:18.728 "bdev_nvme_remove_error_injection", 00:07:18.728 "bdev_nvme_add_error_injection", 00:07:18.728 "bdev_nvme_get_discovery_info", 00:07:18.728 "bdev_nvme_stop_discovery", 00:07:18.728 "bdev_nvme_start_discovery", 00:07:18.728 "bdev_nvme_get_controller_health_info", 00:07:18.728 "bdev_nvme_disable_controller", 00:07:18.728 "bdev_nvme_enable_controller", 00:07:18.728 "bdev_nvme_reset_controller", 00:07:18.728 "bdev_nvme_get_transport_statistics", 00:07:18.728 "bdev_nvme_apply_firmware", 00:07:18.728 "bdev_nvme_detach_controller", 00:07:18.728 "bdev_nvme_get_controllers", 00:07:18.728 "bdev_nvme_attach_controller", 00:07:18.728 "bdev_nvme_set_hotplug", 00:07:18.728 "bdev_nvme_set_options", 00:07:18.728 "bdev_passthru_delete", 00:07:18.729 "bdev_passthru_create", 00:07:18.729 "bdev_lvol_set_parent_bdev", 00:07:18.729 "bdev_lvol_set_parent", 00:07:18.729 "bdev_lvol_check_shallow_copy", 00:07:18.729 "bdev_lvol_start_shallow_copy", 00:07:18.729 "bdev_lvol_grow_lvstore", 00:07:18.729 "bdev_lvol_get_lvols", 00:07:18.729 "bdev_lvol_get_lvstores", 00:07:18.729 "bdev_lvol_delete", 00:07:18.729 "bdev_lvol_set_read_only", 
00:07:18.729 "bdev_lvol_resize", 00:07:18.729 "bdev_lvol_decouple_parent", 00:07:18.729 "bdev_lvol_inflate", 00:07:18.729 "bdev_lvol_rename", 00:07:18.729 "bdev_lvol_clone_bdev", 00:07:18.729 "bdev_lvol_clone", 00:07:18.729 "bdev_lvol_snapshot", 00:07:18.729 "bdev_lvol_create", 00:07:18.729 "bdev_lvol_delete_lvstore", 00:07:18.729 "bdev_lvol_rename_lvstore", 00:07:18.729 "bdev_lvol_create_lvstore", 00:07:18.729 "bdev_raid_set_options", 00:07:18.729 "bdev_raid_remove_base_bdev", 00:07:18.729 "bdev_raid_add_base_bdev", 00:07:18.729 "bdev_raid_delete", 00:07:18.729 "bdev_raid_create", 00:07:18.729 "bdev_raid_get_bdevs", 00:07:18.729 "bdev_error_inject_error", 00:07:18.729 "bdev_error_delete", 00:07:18.729 "bdev_error_create", 00:07:18.729 "bdev_split_delete", 00:07:18.729 "bdev_split_create", 00:07:18.729 "bdev_delay_delete", 00:07:18.729 "bdev_delay_create", 00:07:18.729 "bdev_delay_update_latency", 00:07:18.729 "bdev_zone_block_delete", 00:07:18.729 "bdev_zone_block_create", 00:07:18.729 "blobfs_create", 00:07:18.729 "blobfs_detect", 00:07:18.729 "blobfs_set_cache_size", 00:07:18.729 "bdev_aio_delete", 00:07:18.729 "bdev_aio_rescan", 00:07:18.729 "bdev_aio_create", 00:07:18.729 "bdev_ftl_set_property", 00:07:18.729 "bdev_ftl_get_properties", 00:07:18.729 "bdev_ftl_get_stats", 00:07:18.729 "bdev_ftl_unmap", 00:07:18.729 "bdev_ftl_unload", 00:07:18.729 "bdev_ftl_delete", 00:07:18.729 "bdev_ftl_load", 00:07:18.729 "bdev_ftl_create", 00:07:18.729 "bdev_virtio_attach_controller", 00:07:18.729 "bdev_virtio_scsi_get_devices", 00:07:18.729 "bdev_virtio_detach_controller", 00:07:18.729 "bdev_virtio_blk_set_hotplug", 00:07:18.729 "bdev_iscsi_delete", 00:07:18.729 "bdev_iscsi_create", 00:07:18.729 "bdev_iscsi_set_options", 00:07:18.729 "accel_error_inject_error", 00:07:18.729 "ioat_scan_accel_module", 00:07:18.729 "dsa_scan_accel_module", 00:07:18.729 "iaa_scan_accel_module", 00:07:18.729 "keyring_file_remove_key", 00:07:18.729 "keyring_file_add_key", 00:07:18.729 
"keyring_linux_set_options", 00:07:18.729 "fsdev_aio_delete", 00:07:18.729 "fsdev_aio_create", 00:07:18.729 "iscsi_get_histogram", 00:07:18.729 "iscsi_enable_histogram", 00:07:18.729 "iscsi_set_options", 00:07:18.729 "iscsi_get_auth_groups", 00:07:18.729 "iscsi_auth_group_remove_secret", 00:07:18.729 "iscsi_auth_group_add_secret", 00:07:18.729 "iscsi_delete_auth_group", 00:07:18.729 "iscsi_create_auth_group", 00:07:18.729 "iscsi_set_discovery_auth", 00:07:18.729 "iscsi_get_options", 00:07:18.729 "iscsi_target_node_request_logout", 00:07:18.729 "iscsi_target_node_set_redirect", 00:07:18.729 "iscsi_target_node_set_auth", 00:07:18.729 "iscsi_target_node_add_lun", 00:07:18.729 "iscsi_get_stats", 00:07:18.729 "iscsi_get_connections", 00:07:18.729 "iscsi_portal_group_set_auth", 00:07:18.729 "iscsi_start_portal_group", 00:07:18.729 "iscsi_delete_portal_group", 00:07:18.729 "iscsi_create_portal_group", 00:07:18.729 "iscsi_get_portal_groups", 00:07:18.729 "iscsi_delete_target_node", 00:07:18.729 "iscsi_target_node_remove_pg_ig_maps", 00:07:18.729 "iscsi_target_node_add_pg_ig_maps", 00:07:18.729 "iscsi_create_target_node", 00:07:18.729 "iscsi_get_target_nodes", 00:07:18.729 "iscsi_delete_initiator_group", 00:07:18.729 "iscsi_initiator_group_remove_initiators", 00:07:18.729 "iscsi_initiator_group_add_initiators", 00:07:18.729 "iscsi_create_initiator_group", 00:07:18.729 "iscsi_get_initiator_groups", 00:07:18.729 "nvmf_set_crdt", 00:07:18.729 "nvmf_set_config", 00:07:18.729 "nvmf_set_max_subsystems", 00:07:18.729 "nvmf_stop_mdns_prr", 00:07:18.729 "nvmf_publish_mdns_prr", 00:07:18.729 "nvmf_subsystem_get_listeners", 00:07:18.729 "nvmf_subsystem_get_qpairs", 00:07:18.729 "nvmf_subsystem_get_controllers", 00:07:18.729 "nvmf_get_stats", 00:07:18.729 "nvmf_get_transports", 00:07:18.729 "nvmf_create_transport", 00:07:18.729 "nvmf_get_targets", 00:07:18.729 "nvmf_delete_target", 00:07:18.729 "nvmf_create_target", 00:07:18.729 "nvmf_subsystem_allow_any_host", 00:07:18.729 
"nvmf_subsystem_set_keys", 00:07:18.729 "nvmf_subsystem_remove_host", 00:07:18.729 "nvmf_subsystem_add_host", 00:07:18.729 "nvmf_ns_remove_host", 00:07:18.729 "nvmf_ns_add_host", 00:07:18.729 "nvmf_subsystem_remove_ns", 00:07:18.729 "nvmf_subsystem_set_ns_ana_group", 00:07:18.729 "nvmf_subsystem_add_ns", 00:07:18.729 "nvmf_subsystem_listener_set_ana_state", 00:07:18.729 "nvmf_discovery_get_referrals", 00:07:18.729 "nvmf_discovery_remove_referral", 00:07:18.729 "nvmf_discovery_add_referral", 00:07:18.729 "nvmf_subsystem_remove_listener", 00:07:18.729 "nvmf_subsystem_add_listener", 00:07:18.729 "nvmf_delete_subsystem", 00:07:18.729 "nvmf_create_subsystem", 00:07:18.729 "nvmf_get_subsystems", 00:07:18.729 "env_dpdk_get_mem_stats", 00:07:18.729 "nbd_get_disks", 00:07:18.729 "nbd_stop_disk", 00:07:18.729 "nbd_start_disk", 00:07:18.729 "ublk_recover_disk", 00:07:18.729 "ublk_get_disks", 00:07:18.729 "ublk_stop_disk", 00:07:18.729 "ublk_start_disk", 00:07:18.729 "ublk_destroy_target", 00:07:18.729 "ublk_create_target", 00:07:18.729 "virtio_blk_create_transport", 00:07:18.729 "virtio_blk_get_transports", 00:07:18.729 "vhost_controller_set_coalescing", 00:07:18.729 "vhost_get_controllers", 00:07:18.729 "vhost_delete_controller", 00:07:18.729 "vhost_create_blk_controller", 00:07:18.729 "vhost_scsi_controller_remove_target", 00:07:18.729 "vhost_scsi_controller_add_target", 00:07:18.729 "vhost_start_scsi_controller", 00:07:18.729 "vhost_create_scsi_controller", 00:07:18.729 "thread_set_cpumask", 00:07:18.729 "scheduler_set_options", 00:07:18.729 "framework_get_governor", 00:07:18.729 "framework_get_scheduler", 00:07:18.730 "framework_set_scheduler", 00:07:18.730 "framework_get_reactors", 00:07:18.730 "thread_get_io_channels", 00:07:18.730 "thread_get_pollers", 00:07:18.730 "thread_get_stats", 00:07:18.730 "framework_monitor_context_switch", 00:07:18.730 "spdk_kill_instance", 00:07:18.730 "log_enable_timestamps", 00:07:18.730 "log_get_flags", 00:07:18.730 "log_clear_flag", 
00:07:18.730 "log_set_flag", 00:07:18.730 "log_get_level", 00:07:18.730 "log_set_level", 00:07:18.730 "log_get_print_level", 00:07:18.730 "log_set_print_level", 00:07:18.730 "framework_enable_cpumask_locks", 00:07:18.730 "framework_disable_cpumask_locks", 00:07:18.730 "framework_wait_init", 00:07:18.730 "framework_start_init", 00:07:18.730 "scsi_get_devices", 00:07:18.730 "bdev_get_histogram", 00:07:18.730 "bdev_enable_histogram", 00:07:18.730 "bdev_set_qos_limit", 00:07:18.730 "bdev_set_qd_sampling_period", 00:07:18.730 "bdev_get_bdevs", 00:07:18.730 "bdev_reset_iostat", 00:07:18.730 "bdev_get_iostat", 00:07:18.730 "bdev_examine", 00:07:18.730 "bdev_wait_for_examine", 00:07:18.730 "bdev_set_options", 00:07:18.730 "accel_get_stats", 00:07:18.730 "accel_set_options", 00:07:18.730 "accel_set_driver", 00:07:18.730 "accel_crypto_key_destroy", 00:07:18.730 "accel_crypto_keys_get", 00:07:18.730 "accel_crypto_key_create", 00:07:18.730 "accel_assign_opc", 00:07:18.730 "accel_get_module_info", 00:07:18.730 "accel_get_opc_assignments", 00:07:18.730 "vmd_rescan", 00:07:18.730 "vmd_remove_device", 00:07:18.730 "vmd_enable", 00:07:18.730 "sock_get_default_impl", 00:07:18.730 "sock_set_default_impl", 00:07:18.730 "sock_impl_set_options", 00:07:18.730 "sock_impl_get_options", 00:07:18.730 "iobuf_get_stats", 00:07:18.730 "iobuf_set_options", 00:07:18.730 "keyring_get_keys", 00:07:18.730 "framework_get_pci_devices", 00:07:18.730 "framework_get_config", 00:07:18.730 "framework_get_subsystems", 00:07:18.730 "fsdev_set_opts", 00:07:18.730 "fsdev_get_opts", 00:07:18.730 "trace_get_info", 00:07:18.730 "trace_get_tpoint_group_mask", 00:07:18.730 "trace_disable_tpoint_group", 00:07:18.730 "trace_enable_tpoint_group", 00:07:18.730 "trace_clear_tpoint_mask", 00:07:18.730 "trace_set_tpoint_mask", 00:07:18.730 "notify_get_notifications", 00:07:18.730 "notify_get_types", 00:07:18.730 "spdk_get_version", 00:07:18.730 "rpc_get_methods" 00:07:18.730 ] 00:07:18.730 14:33:17 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:18.730 14:33:17 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:18.730 14:33:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:18.730 14:33:17 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:18.730 14:33:17 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57800 00:07:18.730 14:33:17 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 57800 ']' 00:07:18.730 14:33:17 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 57800 00:07:18.730 14:33:17 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:07:18.730 14:33:17 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:18.730 14:33:17 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57800 00:07:18.730 14:33:17 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:18.730 14:33:17 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:18.730 killing process with pid 57800 00:07:18.730 14:33:17 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57800' 00:07:18.730 14:33:17 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 57800 00:07:18.730 14:33:17 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 57800 00:07:21.273 00:07:21.273 real 0m4.066s 00:07:21.274 user 0m7.297s 00:07:21.274 sys 0m0.641s 00:07:21.274 14:33:19 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:21.274 14:33:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:21.274 ************************************ 00:07:21.274 END TEST spdkcli_tcp 00:07:21.274 ************************************ 00:07:21.274 14:33:19 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:21.274 14:33:19 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:21.274 14:33:19 -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:07:21.274 14:33:19 -- common/autotest_common.sh@10 -- # set +x 00:07:21.274 ************************************ 00:07:21.274 START TEST dpdk_mem_utility 00:07:21.274 ************************************ 00:07:21.274 14:33:19 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:21.274 * Looking for test storage... 00:07:21.274 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:21.274 14:33:19 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:21.274 14:33:19 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:07:21.274 14:33:19 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:21.274 14:33:20 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:21.274 14:33:20 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:21.274 14:33:20 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:21.274 14:33:20 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:21.274 14:33:20 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:21.274 14:33:20 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:21.274 14:33:20 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:21.274 14:33:20 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:21.274 14:33:20 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:21.274 14:33:20 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:21.274 14:33:20 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:21.274 14:33:20 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:21.274 14:33:20 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:21.274 14:33:20 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:21.274 
14:33:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:21.274 14:33:20 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:21.274 14:33:20 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:21.274 14:33:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:21.274 14:33:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.274 14:33:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:21.274 14:33:20 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:21.274 14:33:20 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:21.274 14:33:20 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:21.274 14:33:20 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.274 14:33:20 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:21.274 14:33:20 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:21.274 14:33:20 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:21.274 14:33:20 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:21.274 14:33:20 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:21.274 14:33:20 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:21.274 14:33:20 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:21.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.274 --rc genhtml_branch_coverage=1 00:07:21.274 --rc genhtml_function_coverage=1 00:07:21.274 --rc genhtml_legend=1 00:07:21.274 --rc geninfo_all_blocks=1 00:07:21.274 --rc geninfo_unexecuted_blocks=1 00:07:21.274 00:07:21.274 ' 00:07:21.274 14:33:20 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:21.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.274 --rc 
genhtml_branch_coverage=1 00:07:21.274 --rc genhtml_function_coverage=1 00:07:21.274 --rc genhtml_legend=1 00:07:21.274 --rc geninfo_all_blocks=1 00:07:21.274 --rc geninfo_unexecuted_blocks=1 00:07:21.274 00:07:21.274 ' 00:07:21.274 14:33:20 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:21.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.274 --rc genhtml_branch_coverage=1 00:07:21.274 --rc genhtml_function_coverage=1 00:07:21.274 --rc genhtml_legend=1 00:07:21.274 --rc geninfo_all_blocks=1 00:07:21.274 --rc geninfo_unexecuted_blocks=1 00:07:21.274 00:07:21.274 ' 00:07:21.274 14:33:20 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:21.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.274 --rc genhtml_branch_coverage=1 00:07:21.274 --rc genhtml_function_coverage=1 00:07:21.274 --rc genhtml_legend=1 00:07:21.274 --rc geninfo_all_blocks=1 00:07:21.274 --rc geninfo_unexecuted_blocks=1 00:07:21.274 00:07:21.274 ' 00:07:21.274 14:33:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:21.274 14:33:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57922 00:07:21.274 14:33:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:21.274 14:33:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57922 00:07:21.274 14:33:20 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 57922 ']' 00:07:21.274 14:33:20 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.274 14:33:20 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:21.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:21.274 14:33:20 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.274 14:33:20 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:21.274 14:33:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:21.274 [2024-11-04 14:33:20.175120] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:07:21.274 [2024-11-04 14:33:20.175294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57922 ] 00:07:21.274 [2024-11-04 14:33:20.348083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.533 [2024-11-04 14:33:20.481634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.542 14:33:21 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:22.542 14:33:21 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:07:22.542 14:33:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:22.542 14:33:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:22.542 14:33:21 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.542 14:33:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:22.542 { 00:07:22.542 "filename": "/tmp/spdk_mem_dump.txt" 00:07:22.542 } 00:07:22.542 14:33:21 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.542 14:33:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:22.542 DPDK memory size 816.000000 MiB in 1 heap(s) 00:07:22.542 1 heaps 
totaling size 816.000000 MiB 00:07:22.542 size: 816.000000 MiB heap id: 0 00:07:22.542 end heaps---------- 00:07:22.542 9 mempools totaling size 595.772034 MiB 00:07:22.542 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:22.542 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:22.542 size: 92.545471 MiB name: bdev_io_57922 00:07:22.542 size: 50.003479 MiB name: msgpool_57922 00:07:22.542 size: 36.509338 MiB name: fsdev_io_57922 00:07:22.542 size: 21.763794 MiB name: PDU_Pool 00:07:22.542 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:22.542 size: 4.133484 MiB name: evtpool_57922 00:07:22.542 size: 0.026123 MiB name: Session_Pool 00:07:22.542 end mempools------- 00:07:22.542 6 memzones totaling size 4.142822 MiB 00:07:22.542 size: 1.000366 MiB name: RG_ring_0_57922 00:07:22.542 size: 1.000366 MiB name: RG_ring_1_57922 00:07:22.542 size: 1.000366 MiB name: RG_ring_4_57922 00:07:22.542 size: 1.000366 MiB name: RG_ring_5_57922 00:07:22.542 size: 0.125366 MiB name: RG_ring_2_57922 00:07:22.542 size: 0.015991 MiB name: RG_ring_3_57922 00:07:22.542 end memzones------- 00:07:22.542 14:33:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:22.542 heap id: 0 total size: 816.000000 MiB number of busy elements: 309 number of free elements: 18 00:07:22.542 list of free elements. 
size: 16.792847 MiB
00:07:22.542 element at address: 0x200006400000 with size: 1.995972 MiB
00:07:22.542 element at address: 0x20000a600000 with size: 1.995972 MiB
00:07:22.542 element at address: 0x200003e00000 with size: 1.991028 MiB
00:07:22.542 element at address: 0x200018d00040 with size: 0.999939 MiB
00:07:22.542 element at address: 0x200019100040 with size: 0.999939 MiB
00:07:22.542 element at address: 0x200019200000 with size: 0.999084 MiB
00:07:22.542 element at address: 0x200031e00000 with size: 0.994324 MiB
00:07:22.542 element at address: 0x200000400000 with size: 0.992004 MiB
00:07:22.542 element at address: 0x200018a00000 with size: 0.959656 MiB
00:07:22.542 element at address: 0x200019500040 with size: 0.936401 MiB
00:07:22.542 element at address: 0x200000200000 with size: 0.716980 MiB
00:07:22.542 element at address: 0x20001ac00000 with size: 0.563171 MiB
00:07:22.542 element at address: 0x200000c00000 with size: 0.490173 MiB
00:07:22.542 element at address: 0x200018e00000 with size: 0.487976 MiB
00:07:22.542 element at address: 0x200019600000 with size: 0.485413 MiB
00:07:22.542 element at address: 0x200012c00000 with size: 0.443481 MiB
00:07:22.542 element at address: 0x200028000000 with size: 0.390442 MiB
00:07:22.542 element at address: 0x200000800000 with size: 0.350891 MiB
00:07:22.542 list of standard malloc elements. size: 199.286255 MiB
00:07:22.542 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:07:22.542 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:07:22.542 element at address: 0x200018bfff80 with size: 1.000183 MiB
00:07:22.542 element at address: 0x200018ffff80 with size: 1.000183 MiB
00:07:22.542 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:07:22.542 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:07:22.542 element at address: 0x2000195eff40 with size: 0.062683 MiB
00:07:22.542 element at address: 0x2000003fdf40 with size: 0.007996 MiB
[... several hundred per-object bookkeeping elements of 0.000427 MiB and smaller omitted ...]
00:07:22.544 list of memzone associated elements. size: 599.920898 MiB
00:07:22.544 element at address: 0x20001ac954c0 with size: 211.416809 MiB; memzone: MP_PDU_immediate_data_Pool_0 (211.416626 MiB)
00:07:22.544 element at address: 0x20002806ff80 with size: 157.562622 MiB; memzone: MP_PDU_data_out_Pool_0 (157.562439 MiB)
00:07:22.544 element at address: 0x200012df4740 with size: 92.045105 MiB; memzone: MP_bdev_io_57922_0 (92.044922 MiB)
00:07:22.544 element at address: 0x200000dff340 with size: 48.003113 MiB; memzone: MP_msgpool_57922_0 (48.002930 MiB)
00:07:22.545 element at address: 0x200003ffdb40 with size: 36.008972 MiB; memzone: MP_fsdev_io_57922_0 (36.008789 MiB)
00:07:22.545 element at address: 0x2000197be900 with size: 20.255615 MiB; memzone: MP_PDU_Pool_0 (20.255432 MiB)
00:07:22.545 element at address: 0x200031ffeb00 with size: 18.005127 MiB; memzone: MP_SCSI_TASK_Pool_0 (18.004944 MiB)
00:07:22.545 element at address: 0x2000004ffec0 with size: 3.000305 MiB; memzone: MP_evtpool_57922_0 (3.000122 MiB)
00:07:22.545 element at address: 0x2000009ffdc0 with size: 2.000549 MiB; memzone: RG_MP_msgpool_57922 (2.000366 MiB)
00:07:22.545 element at address: 0x2000002d7c00 with size: 1.008179 MiB; memzone: MP_evtpool_57922 (1.007996 MiB)
00:07:22.545 element at address: 0x200018efde00 with size: 1.008179 MiB; memzone: MP_PDU_Pool (1.007996 MiB)
00:07:22.545 element at address: 0x2000196bc780 with size: 1.008179 MiB; memzone: MP_PDU_immediate_data_Pool (1.007996 MiB)
00:07:22.545 element at address: 0x200018afde00 with size: 1.008179 MiB; memzone: MP_PDU_data_out_Pool (1.007996 MiB)
00:07:22.545 element at address: 0x200012cf25c0 with size: 1.008179 MiB; memzone: MP_SCSI_TASK_Pool (1.007996 MiB)
00:07:22.545 element at address: 0x200000cff100 with size: 1.000549 MiB; memzone: RG_ring_0_57922 (1.000366 MiB)
00:07:22.545 element at address: 0x2000008ffb80 with size: 1.000549 MiB; memzone: RG_ring_1_57922 (1.000366 MiB)
00:07:22.545 element at address: 0x2000192ffd40 with size: 1.000549 MiB; memzone: RG_ring_4_57922 (1.000366 MiB)
00:07:22.545 element at address: 0x200031efe8c0 with size: 1.000549 MiB; memzone: RG_ring_5_57922 (1.000366 MiB)
00:07:22.545 element at address: 0x20000087f5c0 with size: 0.500549 MiB; memzone: RG_MP_fsdev_io_57922 (0.500366 MiB)
00:07:22.545 element at address: 0x200000c7ecc0 with size: 0.500549 MiB; memzone: RG_MP_bdev_io_57922 (0.500366 MiB)
00:07:22.545 element at address: 0x200018e7dac0 with size: 0.500549 MiB; memzone: RG_MP_PDU_Pool (0.500366 MiB)
00:07:22.545 element at address: 0x200012c72280 with size: 0.500549 MiB; memzone: RG_MP_SCSI_TASK_Pool (0.500366 MiB)
00:07:22.545 element at address: 0x20001967c440 with size: 0.250549 MiB; memzone: RG_MP_PDU_immediate_data_Pool (0.250366 MiB)
00:07:22.545 element at address: 0x2000002b78c0 with size: 0.125549 MiB; memzone: RG_MP_evtpool_57922 (0.125366 MiB)
00:07:22.545 element at address: 0x20000085df80 with size: 0.125549 MiB; memzone: RG_ring_2_57922 (0.125366 MiB)
00:07:22.545 element at address: 0x200018af5ac0 with size: 0.031799 MiB; memzone: RG_MP_PDU_data_out_Pool (0.031616 MiB)
00:07:22.545 element at address: 0x200028064140 with size: 0.023804 MiB; memzone: MP_Session_Pool_0 (0.023621 MiB)
00:07:22.545 element at address: 0x200000859d40 with size: 0.016174 MiB; memzone: RG_ring_3_57922 (0.015991 MiB)
00:07:22.545 element at address: 0x20002806a2c0 with size: 0.002502 MiB; memzone: RG_MP_Session_Pool (0.002319 MiB)
00:07:22.545 element at address: 0x2000004ffa40 with size: 0.000366 MiB; memzone: MP_msgpool_57922 (0.000183 MiB)
00:07:22.545 element at address: 0x2000008ff900 with size: 0.000366 MiB; memzone: MP_fsdev_io_57922 (0.000183 MiB)
00:07:22.545 element at address: 0x200012bffd80 with size: 0.000366 MiB; memzone: MP_bdev_io_57922 (0.000183 MiB)
00:07:22.545 element at address: 0x20002806ae00 with size: 0.000366 MiB; memzone: MP_Session_Pool (0.000183 MiB)
00:07:22.545 14:33:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:07:22.545 14:33:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57922
00:07:22.545 14:33:21 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 57922 ']'
00:07:22.545 14:33:21 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 57922
00:07:22.545 14:33:21 dpdk_mem_utility -- common/autotest_common.sh@957 -- #
uname
00:07:22.545 14:33:21 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:07:22.545 14:33:21 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57922
00:07:22.545 14:33:21 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:07:22.545 14:33:21 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:07:22.545 killing process with pid 57922
14:33:21 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57922'
00:07:22.545 14:33:21 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 57922
00:07:22.545 14:33:21 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 57922
00:07:25.079
00:07:25.079 real 0m3.818s
00:07:25.079 user 0m3.845s
00:07:25.079 sys 0m0.570s
00:07:25.079 14:33:23 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:25.079 ************************************
00:07:25.079 END TEST dpdk_mem_utility
00:07:25.079 ************************************
00:07:25.079 14:33:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:07:25.079 14:33:23 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:07:25.079 14:33:23 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:07:25.079 14:33:23 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:25.079 14:33:23 -- common/autotest_common.sh@10 -- # set +x
00:07:25.079 ************************************
00:07:25.079 START TEST event
00:07:25.079 ************************************
00:07:25.079 14:33:23 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:07:25.079 * Looking for test storage...
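The `killprocess` trace above follows a standard pattern: validate the pid argument, probe liveness with `kill -0`, look up the process name with `ps`, guard against killing a `sudo` wrapper, then `kill` and `wait`. The helper below is a minimal sketch of that pattern, not the actual `autotest_common.sh` implementation; the `sleep` child is a hypothetical stand-in for the SPDK app.

```shell
#!/usr/bin/env bash
# Minimal sketch of the killprocess pattern seen in the trace above.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                 # no pid given
    kill -0 "$pid" 2>/dev/null || return 0    # process already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")   # comm name, as in the trace
    [ "$name" = sudo ] && return 1            # simplified sudo guard
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # wait only works for child pids
}

sleep 60 &          # hypothetical child standing in for the SPDK app
pid=$!
killprocess "$pid"
```

The real script branches further on the process name (e.g. `reactor_0` vs `sudo`) before deciding which pid to signal; this sketch keeps only the common path.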
00:07:25.079 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:07:25.079 14:33:23 event -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:07:25.079 14:33:23 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:07:25.079 14:33:23 event -- common/autotest_common.sh@1691 -- # lcov --version
00:07:25.079 14:33:23 event -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:07:25.079 14:33:23 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:25.079 14:33:23 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:25.079 14:33:23 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:25.079 14:33:23 event -- scripts/common.sh@336 -- # IFS=.-:
00:07:25.079 14:33:23 event -- scripts/common.sh@336 -- # read -ra ver1
00:07:25.079 14:33:23 event -- scripts/common.sh@337 -- # IFS=.-:
00:07:25.079 14:33:23 event -- scripts/common.sh@337 -- # read -ra ver2
00:07:25.079 14:33:23 event -- scripts/common.sh@338 -- # local 'op=<'
00:07:25.079 14:33:23 event -- scripts/common.sh@340 -- # ver1_l=2
00:07:25.079 14:33:23 event -- scripts/common.sh@341 -- # ver2_l=1
00:07:25.080 14:33:23 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:25.080 14:33:23 event -- scripts/common.sh@344 -- # case "$op" in
00:07:25.080 14:33:23 event -- scripts/common.sh@345 -- # : 1
00:07:25.080 14:33:23 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:25.080 14:33:23 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:25.080 14:33:23 event -- scripts/common.sh@365 -- # decimal 1
00:07:25.080 14:33:23 event -- scripts/common.sh@353 -- # local d=1
00:07:25.080 14:33:23 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:25.080 14:33:23 event -- scripts/common.sh@355 -- # echo 1
00:07:25.080 14:33:23 event -- scripts/common.sh@365 -- # ver1[v]=1
00:07:25.080 14:33:23 event -- scripts/common.sh@366 -- # decimal 2
00:07:25.080 14:33:23 event -- scripts/common.sh@353 -- # local d=2
00:07:25.080 14:33:23 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:25.080 14:33:23 event -- scripts/common.sh@355 -- # echo 2
00:07:25.080 14:33:23 event -- scripts/common.sh@366 -- # ver2[v]=2
00:07:25.080 14:33:23 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:25.080 14:33:23 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:25.080 14:33:23 event -- scripts/common.sh@368 -- # return 0
00:07:25.080 14:33:23 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:25.080 14:33:23 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:07:25.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:25.080 --rc genhtml_branch_coverage=1
00:07:25.080 --rc genhtml_function_coverage=1
00:07:25.080 --rc genhtml_legend=1
00:07:25.080 --rc geninfo_all_blocks=1
00:07:25.080 --rc geninfo_unexecuted_blocks=1
00:07:25.080
00:07:25.080 '
00:07:25.080 14:33:23 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:07:25.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:25.080 --rc genhtml_branch_coverage=1
00:07:25.080 --rc genhtml_function_coverage=1
00:07:25.080 --rc genhtml_legend=1
00:07:25.080 --rc geninfo_all_blocks=1
00:07:25.080 --rc geninfo_unexecuted_blocks=1
00:07:25.080
00:07:25.080 '
00:07:25.080 14:33:23 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:07:25.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:25.080 --rc genhtml_branch_coverage=1
00:07:25.080 --rc genhtml_function_coverage=1
00:07:25.080 --rc genhtml_legend=1
00:07:25.080 --rc geninfo_all_blocks=1
00:07:25.080 --rc geninfo_unexecuted_blocks=1
00:07:25.080
00:07:25.080 '
00:07:25.080 14:33:23 event -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:07:25.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:25.080 --rc genhtml_branch_coverage=1
00:07:25.080 --rc genhtml_function_coverage=1
00:07:25.080 --rc genhtml_legend=1
00:07:25.080 --rc geninfo_all_blocks=1
00:07:25.080 --rc geninfo_unexecuted_blocks=1
00:07:25.080
00:07:25.080 '
00:07:25.080 14:33:23 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:07:25.080 14:33:23 event -- bdev/nbd_common.sh@6 -- # set -e
00:07:25.080 14:33:23 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:07:25.080 14:33:23 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']'
00:07:25.080 14:33:23 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:25.080 14:33:23 event -- common/autotest_common.sh@10 -- # set +x
00:07:25.080 ************************************
00:07:25.080 START TEST event_perf
00:07:25.080 ************************************
00:07:25.080 14:33:23 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:07:25.080 Running I/O for 1 seconds...[2024-11-04 14:33:24.019470] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization...
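The `lt 1.15 2` trace above walks the component-wise version comparison in scripts/common.sh: both strings are split on `.`, `-` and `:`, and the components are compared field by field as integers (so `1.15 < 2`, and also `1.9 < 1.15` numerically). The function below is a simplified sketch of that idea, not the real `cmp_versions`; it assumes plain decimal components and skips the `decimal` validation step.

```shell
# Simplified component-wise "less than" check, modeled on the
# cmp_versions trace above: split on . - : and compare numerically.
version_lt() {
    local IFS='.-:'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # pad short versions with 0
        (( a < b )) && return 0                 # e.g. 1.15 < 2
        (( a > b )) && return 1
    done
    return 1                                    # equal is not "less than"
}
```

Note the numeric comparison is what makes `1.9 < 1.15` true here, matching the trace's semantics rather than a lexicographic sort.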
00:07:25.080 [2024-11-04 14:33:24.020339] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58025 ] 00:07:25.345 [2024-11-04 14:33:24.206704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:25.345 [2024-11-04 14:33:24.370793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.345 [2024-11-04 14:33:24.370977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.345 [2024-11-04 14:33:24.371072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:25.345 [2024-11-04 14:33:24.371270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.724 Running I/O for 1 seconds... 00:07:26.724 lcore 0: 164297 00:07:26.724 lcore 1: 164296 00:07:26.724 lcore 2: 164296 00:07:26.724 lcore 3: 164295 00:07:26.724 done. 
00:07:26.724 00:07:26.724 real 0m1.654s 00:07:26.724 user 0m4.393s 00:07:26.724 sys 0m0.132s 00:07:26.724 14:33:25 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:26.724 14:33:25 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:26.724 ************************************ 00:07:26.724 END TEST event_perf 00:07:26.724 ************************************ 00:07:26.724 14:33:25 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:26.724 14:33:25 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:07:26.724 14:33:25 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:26.724 14:33:25 event -- common/autotest_common.sh@10 -- # set +x 00:07:26.724 ************************************ 00:07:26.724 START TEST event_reactor 00:07:26.724 ************************************ 00:07:26.724 14:33:25 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:26.724 [2024-11-04 14:33:25.718313] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:07:26.724 [2024-11-04 14:33:25.719002] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58064 ] 00:07:26.982 [2024-11-04 14:33:25.906340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.982 [2024-11-04 14:33:26.056658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.359 test_start 00:07:28.359 oneshot 00:07:28.359 tick 100 00:07:28.359 tick 100 00:07:28.359 tick 250 00:07:28.359 tick 100 00:07:28.359 tick 100 00:07:28.359 tick 100 00:07:28.359 tick 250 00:07:28.359 tick 500 00:07:28.359 tick 100 00:07:28.359 tick 100 00:07:28.359 tick 250 00:07:28.359 tick 100 00:07:28.359 tick 100 00:07:28.359 test_end 00:07:28.359 00:07:28.359 real 0m1.631s 00:07:28.359 user 0m1.404s 00:07:28.359 sys 0m0.116s 00:07:28.359 14:33:27 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:28.359 14:33:27 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:28.359 ************************************ 00:07:28.359 END TEST event_reactor 00:07:28.359 ************************************ 00:07:28.359 14:33:27 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:28.359 14:33:27 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:07:28.359 14:33:27 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:28.359 14:33:27 event -- common/autotest_common.sh@10 -- # set +x 00:07:28.359 ************************************ 00:07:28.359 START TEST event_reactor_perf 00:07:28.359 ************************************ 00:07:28.359 14:33:27 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:28.359 [2024-11-04 
14:33:27.401851] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:07:28.359 [2024-11-04 14:33:27.402041] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58106 ] 00:07:28.626 [2024-11-04 14:33:27.586024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.884 [2024-11-04 14:33:27.750845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.261 test_start 00:07:30.261 test_end 00:07:30.261 Performance: 278254 events per second 00:07:30.261 00:07:30.261 real 0m1.610s 00:07:30.261 user 0m1.388s 00:07:30.261 sys 0m0.112s 00:07:30.261 14:33:28 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:30.261 14:33:28 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:30.261 ************************************ 00:07:30.261 END TEST event_reactor_perf 00:07:30.261 ************************************ 00:07:30.261 14:33:29 event -- event/event.sh@49 -- # uname -s 00:07:30.261 14:33:29 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:30.261 14:33:29 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:30.261 14:33:29 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:30.261 14:33:29 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:30.261 14:33:29 event -- common/autotest_common.sh@10 -- # set +x 00:07:30.261 ************************************ 00:07:30.261 START TEST event_scheduler 00:07:30.261 ************************************ 00:07:30.261 14:33:29 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:30.261 * Looking for test storage... 
00:07:30.261 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:30.261 14:33:29 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:30.261 14:33:29 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:30.261 14:33:29 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:07:30.261 14:33:29 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:30.261 14:33:29 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.261 14:33:29 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.261 14:33:29 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.261 14:33:29 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.261 14:33:29 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.261 14:33:29 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.261 14:33:29 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.261 14:33:29 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.261 14:33:29 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.261 14:33:29 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.261 14:33:29 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.261 14:33:29 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:30.261 14:33:29 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:30.261 14:33:29 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.261 14:33:29 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:30.261 14:33:29 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:30.261 14:33:29 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:30.261 14:33:29 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.261 14:33:29 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:30.261 14:33:29 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.261 14:33:29 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:30.261 14:33:29 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:30.261 14:33:29 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.261 14:33:29 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:30.261 14:33:29 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.261 14:33:29 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.261 14:33:29 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.261 14:33:29 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:30.261 14:33:29 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.261 14:33:29 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:30.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.261 --rc genhtml_branch_coverage=1 00:07:30.261 --rc genhtml_function_coverage=1 00:07:30.261 --rc genhtml_legend=1 00:07:30.262 --rc geninfo_all_blocks=1 00:07:30.262 --rc geninfo_unexecuted_blocks=1 00:07:30.262 00:07:30.262 ' 00:07:30.262 14:33:29 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:30.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.262 --rc genhtml_branch_coverage=1 00:07:30.262 --rc genhtml_function_coverage=1 00:07:30.262 --rc 
genhtml_legend=1 00:07:30.262 --rc geninfo_all_blocks=1 00:07:30.262 --rc geninfo_unexecuted_blocks=1 00:07:30.262 00:07:30.262 ' 00:07:30.262 14:33:29 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:30.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.262 --rc genhtml_branch_coverage=1 00:07:30.262 --rc genhtml_function_coverage=1 00:07:30.262 --rc genhtml_legend=1 00:07:30.262 --rc geninfo_all_blocks=1 00:07:30.262 --rc geninfo_unexecuted_blocks=1 00:07:30.262 00:07:30.262 ' 00:07:30.262 14:33:29 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:30.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.262 --rc genhtml_branch_coverage=1 00:07:30.262 --rc genhtml_function_coverage=1 00:07:30.262 --rc genhtml_legend=1 00:07:30.262 --rc geninfo_all_blocks=1 00:07:30.262 --rc geninfo_unexecuted_blocks=1 00:07:30.262 00:07:30.262 ' 00:07:30.262 14:33:29 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:30.262 14:33:29 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58177 00:07:30.262 14:33:29 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:30.262 14:33:29 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:30.262 14:33:29 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58177 00:07:30.262 14:33:29 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 58177 ']' 00:07:30.262 14:33:29 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.262 14:33:29 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:30.262 14:33:29 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:30.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.262 14:33:29 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:30.262 14:33:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:30.262 [2024-11-04 14:33:29.299953] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:07:30.262 [2024-11-04 14:33:29.300147] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58177 ] 00:07:30.521 [2024-11-04 14:33:29.484576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:30.521 [2024-11-04 14:33:29.620156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.521 [2024-11-04 14:33:29.620238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.521 [2024-11-04 14:33:29.620347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.521 [2024-11-04 14:33:29.620349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.456 14:33:30 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:31.456 14:33:30 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:07:31.456 14:33:30 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:31.456 14:33:30 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.456 14:33:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:31.456 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:31.456 POWER: Cannot set governor of lcore 0 to userspace 00:07:31.456 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:31.456 POWER: Cannot set governor of lcore 0 to performance 00:07:31.456 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:31.456 POWER: Cannot set governor of lcore 0 to userspace 00:07:31.456 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:31.456 POWER: Cannot set governor of lcore 0 to userspace 00:07:31.456 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:07:31.456 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:31.456 POWER: Unable to set Power Management Environment for lcore 0 00:07:31.456 [2024-11-04 14:33:30.386746] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:07:31.457 [2024-11-04 14:33:30.386788] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:07:31.457 [2024-11-04 14:33:30.386812] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:31.457 [2024-11-04 14:33:30.386855] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:31.457 [2024-11-04 14:33:30.386876] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:31.457 [2024-11-04 14:33:30.386899] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:31.457 14:33:30 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.457 14:33:30 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:31.457 14:33:30 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.457 14:33:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:31.719 [2024-11-04 14:33:30.714430] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:07:31.719 14:33:30 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.719 14:33:30 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:31.719 14:33:30 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:31.719 14:33:30 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:31.719 14:33:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:31.719 ************************************ 00:07:31.719 START TEST scheduler_create_thread 00:07:31.719 ************************************ 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:31.720 2 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:31.720 3 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:31.720 4 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:31.720 5 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:31.720 6 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:07:31.720 7 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:31.720 8 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:31.720 9 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:31.720 10 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.720 14:33:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:33.636 14:33:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.636 14:33:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:33.636 14:33:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:33.636 14:33:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.636 14:33:32 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.572 14:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.572 00:07:34.572 real 0m2.620s 00:07:34.572 user 0m0.018s 00:07:34.572 sys 0m0.005s 00:07:34.572 14:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:34.572 ************************************ 00:07:34.572 END TEST scheduler_create_thread 00:07:34.572 ************************************ 00:07:34.572 14:33:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.572 14:33:33 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:34.572 14:33:33 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58177 00:07:34.572 14:33:33 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 58177 ']' 00:07:34.572 14:33:33 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 58177 00:07:34.572 14:33:33 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:07:34.572 14:33:33 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:34.572 14:33:33 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58177 00:07:34.572 killing process with pid 58177 00:07:34.572 14:33:33 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:07:34.572 14:33:33 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:07:34.572 14:33:33 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58177' 00:07:34.572 14:33:33 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 58177 00:07:34.572 14:33:33 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 58177 00:07:34.830 [2024-11-04 14:33:33.825580] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:36.206 00:07:36.207 real 0m5.890s 00:07:36.207 user 0m10.689s 00:07:36.207 sys 0m0.511s 00:07:36.207 14:33:34 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:36.207 14:33:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:36.207 ************************************ 00:07:36.207 END TEST event_scheduler 00:07:36.207 ************************************ 00:07:36.207 14:33:34 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:36.207 14:33:34 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:36.207 14:33:34 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:36.207 14:33:34 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:36.207 14:33:34 event -- common/autotest_common.sh@10 -- # set +x 00:07:36.207 ************************************ 00:07:36.207 START TEST app_repeat 00:07:36.207 ************************************ 00:07:36.207 14:33:34 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:07:36.207 14:33:34 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:36.207 14:33:34 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:36.207 14:33:34 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:36.207 14:33:34 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:36.207 14:33:34 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:36.207 14:33:34 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:36.207 14:33:34 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:36.207 14:33:34 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58288 00:07:36.207 Process app_repeat pid: 58288 00:07:36.207 14:33:34 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:36.207 
14:33:34 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58288' 00:07:36.207 spdk_app_start Round 0 00:07:36.207 14:33:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:36.207 14:33:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:36.207 14:33:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58288 /var/tmp/spdk-nbd.sock 00:07:36.207 14:33:34 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58288 ']' 00:07:36.207 14:33:34 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:36.207 14:33:34 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:36.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:36.207 14:33:34 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:36.207 14:33:34 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:36.207 14:33:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:36.207 14:33:34 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:36.207 [2024-11-04 14:33:35.021792] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:07:36.207 [2024-11-04 14:33:35.021961] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58288 ] 00:07:36.207 [2024-11-04 14:33:35.195125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:36.207 [2024-11-04 14:33:35.327980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.207 [2024-11-04 14:33:35.328008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.142 14:33:36 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:37.142 14:33:36 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:07:37.142 14:33:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:37.708 Malloc0 00:07:37.708 14:33:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:37.966 Malloc1 00:07:37.966 14:33:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:37.966 14:33:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.966 14:33:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:37.966 14:33:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:37.966 14:33:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:37.966 14:33:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:37.966 14:33:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:37.966 14:33:36 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.966 14:33:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:37.966 14:33:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:37.966 14:33:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:37.966 14:33:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:37.966 14:33:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:37.966 14:33:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:37.966 14:33:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:37.966 14:33:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:38.230 /dev/nbd0 00:07:38.488 14:33:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:38.488 14:33:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:38.488 14:33:37 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:38.488 14:33:37 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:38.488 14:33:37 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:38.488 14:33:37 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:38.488 14:33:37 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:38.488 14:33:37 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:38.488 14:33:37 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:38.488 14:33:37 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:38.488 14:33:37 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:38.488 1+0 records in 00:07:38.488 1+0 
records out 00:07:38.488 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367851 s, 11.1 MB/s 00:07:38.488 14:33:37 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:38.488 14:33:37 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:38.488 14:33:37 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:38.488 14:33:37 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:38.488 14:33:37 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:38.488 14:33:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:38.488 14:33:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:38.488 14:33:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:38.747 /dev/nbd1 00:07:38.747 14:33:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:38.747 14:33:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:38.747 14:33:37 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:38.747 14:33:37 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:38.747 14:33:37 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:38.747 14:33:37 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:38.747 14:33:37 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:38.747 14:33:37 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:38.747 14:33:37 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:38.747 14:33:37 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:38.747 14:33:37 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:38.747 1+0 records in 00:07:38.747 1+0 records out 00:07:38.747 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350379 s, 11.7 MB/s 00:07:38.747 14:33:37 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:38.747 14:33:37 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:38.747 14:33:37 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:38.747 14:33:37 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:38.747 14:33:37 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:38.747 14:33:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:38.747 14:33:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:38.747 14:33:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:38.747 14:33:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.747 14:33:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:39.005 14:33:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:39.005 { 00:07:39.005 "nbd_device": "/dev/nbd0", 00:07:39.005 "bdev_name": "Malloc0" 00:07:39.005 }, 00:07:39.005 { 00:07:39.005 "nbd_device": "/dev/nbd1", 00:07:39.005 "bdev_name": "Malloc1" 00:07:39.005 } 00:07:39.005 ]' 00:07:39.005 14:33:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:39.005 { 00:07:39.005 "nbd_device": "/dev/nbd0", 00:07:39.005 "bdev_name": "Malloc0" 00:07:39.005 }, 00:07:39.005 { 00:07:39.005 "nbd_device": "/dev/nbd1", 00:07:39.005 "bdev_name": "Malloc1" 00:07:39.005 } 00:07:39.005 ]' 00:07:39.005 14:33:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:07:39.005 14:33:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:39.005 /dev/nbd1' 00:07:39.005 14:33:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:39.005 /dev/nbd1' 00:07:39.005 14:33:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:39.005 14:33:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:39.005 14:33:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:39.005 14:33:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:39.005 14:33:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:39.005 14:33:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:39.005 14:33:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:39.005 14:33:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:39.005 14:33:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:39.005 14:33:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:39.005 14:33:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:39.005 14:33:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:39.005 256+0 records in 00:07:39.005 256+0 records out 00:07:39.005 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102074 s, 103 MB/s 00:07:39.005 14:33:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:39.005 14:33:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:39.005 256+0 records in 00:07:39.005 256+0 records out 00:07:39.005 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242486 s, 43.2 MB/s 00:07:39.005 14:33:38 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:39.005 14:33:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:39.264 256+0 records in 00:07:39.265 256+0 records out 00:07:39.265 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0330509 s, 31.7 MB/s 00:07:39.265 14:33:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:39.265 14:33:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:39.265 14:33:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:39.265 14:33:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:39.265 14:33:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:39.265 14:33:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:39.265 14:33:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:39.265 14:33:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:39.265 14:33:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:39.265 14:33:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:39.265 14:33:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:39.265 14:33:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:39.265 14:33:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:39.265 14:33:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.265 14:33:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:39.265 14:33:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:39.265 14:33:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:39.265 14:33:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:39.265 14:33:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:39.524 14:33:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:39.524 14:33:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:39.524 14:33:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:39.524 14:33:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:39.524 14:33:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:39.524 14:33:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:39.524 14:33:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:39.524 14:33:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:39.524 14:33:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:39.524 14:33:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:39.783 14:33:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:39.783 14:33:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:39.783 14:33:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:39.783 14:33:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:39.783 14:33:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:39.783 14:33:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:39.783 14:33:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:07:39.783 14:33:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:39.783 14:33:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:39.783 14:33:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:39.783 14:33:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:40.054 14:33:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:40.054 14:33:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:40.054 14:33:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:40.054 14:33:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:40.054 14:33:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:40.054 14:33:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:40.054 14:33:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:40.054 14:33:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:40.054 14:33:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:40.054 14:33:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:40.054 14:33:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:40.054 14:33:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:40.054 14:33:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:40.629 14:33:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:41.563 [2024-11-04 14:33:40.658696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:41.821 [2024-11-04 14:33:40.787177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.821 [2024-11-04 14:33:40.787187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.079 
[2024-11-04 14:33:40.976939] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:42.080 [2024-11-04 14:33:40.977061] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:43.981 14:33:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:43.981 14:33:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:43.981 spdk_app_start Round 1 00:07:43.981 14:33:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58288 /var/tmp/spdk-nbd.sock 00:07:43.981 14:33:42 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58288 ']' 00:07:43.981 14:33:42 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:43.981 14:33:42 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:43.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:43.981 14:33:42 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:43.981 14:33:42 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:43.981 14:33:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:43.981 14:33:42 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:43.981 14:33:42 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:07:43.981 14:33:42 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:44.238 Malloc0 00:07:44.238 14:33:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:44.496 Malloc1 00:07:44.496 14:33:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:44.496 14:33:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.496 14:33:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:44.496 14:33:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:44.496 14:33:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:44.496 14:33:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:44.496 14:33:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:44.496 14:33:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.496 14:33:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:44.496 14:33:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:44.496 14:33:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:44.496 14:33:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:44.496 14:33:43 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:44.496 14:33:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:44.496 14:33:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:44.496 14:33:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:45.063 /dev/nbd0 00:07:45.063 14:33:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:45.063 14:33:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:45.063 14:33:43 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:45.063 14:33:43 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:45.063 14:33:43 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:45.063 14:33:43 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:45.063 14:33:43 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:45.063 14:33:43 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:45.063 14:33:43 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:45.063 14:33:43 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:45.063 14:33:43 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:45.063 1+0 records in 00:07:45.063 1+0 records out 00:07:45.063 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309518 s, 13.2 MB/s 00:07:45.063 14:33:44 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:45.063 14:33:44 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:45.063 14:33:44 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:45.063 
14:33:44 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:45.063 14:33:44 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:45.063 14:33:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:45.063 14:33:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:45.063 14:33:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:45.322 /dev/nbd1 00:07:45.322 14:33:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:45.322 14:33:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:45.322 14:33:44 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:45.322 14:33:44 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:45.322 14:33:44 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:45.322 14:33:44 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:45.322 14:33:44 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:45.322 14:33:44 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:45.322 14:33:44 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:45.322 14:33:44 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:45.322 14:33:44 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:45.322 1+0 records in 00:07:45.322 1+0 records out 00:07:45.322 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341329 s, 12.0 MB/s 00:07:45.322 14:33:44 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:45.322 14:33:44 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:45.322 14:33:44 event.app_repeat 
-- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:45.322 14:33:44 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:45.322 14:33:44 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:45.322 14:33:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:45.322 14:33:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:45.322 14:33:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:45.322 14:33:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.322 14:33:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:45.889 14:33:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:45.889 { 00:07:45.890 "nbd_device": "/dev/nbd0", 00:07:45.890 "bdev_name": "Malloc0" 00:07:45.890 }, 00:07:45.890 { 00:07:45.890 "nbd_device": "/dev/nbd1", 00:07:45.890 "bdev_name": "Malloc1" 00:07:45.890 } 00:07:45.890 ]' 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:45.890 { 00:07:45.890 "nbd_device": "/dev/nbd0", 00:07:45.890 "bdev_name": "Malloc0" 00:07:45.890 }, 00:07:45.890 { 00:07:45.890 "nbd_device": "/dev/nbd1", 00:07:45.890 "bdev_name": "Malloc1" 00:07:45.890 } 00:07:45.890 ]' 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:45.890 /dev/nbd1' 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:45.890 /dev/nbd1' 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:45.890 
14:33:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:45.890 256+0 records in 00:07:45.890 256+0 records out 00:07:45.890 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00929061 s, 113 MB/s 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:45.890 256+0 records in 00:07:45.890 256+0 records out 00:07:45.890 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0302053 s, 34.7 MB/s 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:45.890 256+0 records in 00:07:45.890 256+0 records out 00:07:45.890 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0358773 s, 29.2 MB/s 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.890 14:33:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:46.149 14:33:45 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:46.149 14:33:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:46.149 14:33:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:46.149 14:33:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.149 14:33:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.149 14:33:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:46.149 14:33:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:46.149 14:33:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.149 14:33:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.149 14:33:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:46.431 14:33:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:46.431 14:33:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:46.431 14:33:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:46.431 14:33:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.431 14:33:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.431 14:33:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:46.431 14:33:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:46.431 14:33:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.431 14:33:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:46.431 14:33:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.431 14:33:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:46.690 14:33:45 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:46.690 14:33:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:46.690 14:33:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:46.949 14:33:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:46.949 14:33:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:46.949 14:33:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:46.949 14:33:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:46.949 14:33:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:46.949 14:33:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:46.949 14:33:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:46.949 14:33:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:46.949 14:33:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:46.949 14:33:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:47.517 14:33:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:48.474 [2024-11-04 14:33:47.394475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:48.474 [2024-11-04 14:33:47.519052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.474 [2024-11-04 14:33:47.519057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.732 [2024-11-04 14:33:47.708572] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:48.732 [2024-11-04 14:33:47.708701] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:50.635 spdk_app_start Round 2 00:07:50.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:50.635 14:33:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:50.635 14:33:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:50.635 14:33:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58288 /var/tmp/spdk-nbd.sock 00:07:50.635 14:33:49 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58288 ']' 00:07:50.635 14:33:49 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:50.635 14:33:49 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:50.635 14:33:49 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:50.635 14:33:49 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:50.635 14:33:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:50.635 14:33:49 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:50.635 14:33:49 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:07:50.635 14:33:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:51.202 Malloc0 00:07:51.203 14:33:50 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:51.461 Malloc1 00:07:51.461 14:33:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:51.461 14:33:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.461 14:33:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:51.461 14:33:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:51.461 14:33:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:51.461 14:33:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:51.461 14:33:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:51.461 14:33:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.461 14:33:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:51.461 14:33:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:51.461 14:33:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:51.461 14:33:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:51.461 14:33:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:51.461 14:33:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:51.461 14:33:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:51.461 14:33:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:51.720 /dev/nbd0 00:07:51.720 14:33:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:51.720 14:33:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:51.720 14:33:50 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:51.720 14:33:50 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:51.720 14:33:50 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:51.720 14:33:50 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:51.720 14:33:50 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:51.720 14:33:50 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:51.720 14:33:50 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 
00:07:51.720 14:33:50 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:51.720 14:33:50 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:51.720 1+0 records in 00:07:51.720 1+0 records out 00:07:51.720 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382452 s, 10.7 MB/s 00:07:51.720 14:33:50 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:51.720 14:33:50 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:51.720 14:33:50 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:51.720 14:33:50 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:51.720 14:33:50 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:51.720 14:33:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:51.720 14:33:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:51.720 14:33:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:51.978 /dev/nbd1 00:07:51.978 14:33:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:51.978 14:33:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:51.979 14:33:51 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:51.979 14:33:51 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:51.979 14:33:51 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:51.979 14:33:51 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:51.979 14:33:51 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:51.979 14:33:51 event.app_repeat -- 
common/autotest_common.sh@875 -- # break 00:07:51.979 14:33:51 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:51.979 14:33:51 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:51.979 14:33:51 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:51.979 1+0 records in 00:07:51.979 1+0 records out 00:07:51.979 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311593 s, 13.1 MB/s 00:07:51.979 14:33:51 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:51.979 14:33:51 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:51.979 14:33:51 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:51.979 14:33:51 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:51.979 14:33:51 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:51.979 14:33:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:51.979 14:33:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:51.979 14:33:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:51.979 14:33:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.979 14:33:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:52.545 { 00:07:52.545 "nbd_device": "/dev/nbd0", 00:07:52.545 "bdev_name": "Malloc0" 00:07:52.545 }, 00:07:52.545 { 00:07:52.545 "nbd_device": "/dev/nbd1", 00:07:52.545 "bdev_name": "Malloc1" 00:07:52.545 } 00:07:52.545 ]' 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:52.545 { 
00:07:52.545 "nbd_device": "/dev/nbd0", 00:07:52.545 "bdev_name": "Malloc0" 00:07:52.545 }, 00:07:52.545 { 00:07:52.545 "nbd_device": "/dev/nbd1", 00:07:52.545 "bdev_name": "Malloc1" 00:07:52.545 } 00:07:52.545 ]' 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:52.545 /dev/nbd1' 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:52.545 /dev/nbd1' 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:52.545 256+0 records in 00:07:52.545 256+0 records out 00:07:52.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00718174 s, 146 MB/s 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:52.545 14:33:51 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:52.545 256+0 records in 00:07:52.545 256+0 records out 00:07:52.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0307056 s, 34.1 MB/s 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:52.545 256+0 records in 00:07:52.545 256+0 records out 00:07:52.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0308665 s, 34.0 MB/s 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
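The `nbd_dd_data_verify` calls above run in two phases: a `write` pass that streams a 1 MiB random pattern file onto each device with `oflag=direct`, and a `verify` pass that compares each device back against the pattern with `cmp -b -n 1M`. The same round trip, sketched with ordinary temp files so it runs without block devices (`oflag=direct` is dropped because many filesystems reject O_DIRECT on regular files):

```shell
# Write/verify round trip as in the log's nbd_dd_data_verify, using
# regular files in place of /dev/nbd0 and /dev/nbd1.
src=$(mktemp)   # stands in for the nbdrandtest pattern file
dst=$(mktemp)   # stands in for an nbd device
dd if=/dev/urandom of="$src" bs=4096 count=256 2>/dev/null   # 256 x 4 KiB = 1 MiB
dd if="$src" of="$dst" bs=4096 count=256 2>/dev/null          # write phase
cmp -b -n 1M "$src" "$dst"                                    # verify phase: exit 0 on match
echo "verify status: $?"                                      # prints: verify status: 0
rm -f "$src" "$dst"
```

`cmp -n` accepts multiplier suffixes like `1M` in GNU diffutils, which is why the trace can bound the comparison to exactly the written megabyte.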
00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:52.545 14:33:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:52.802 14:33:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:52.802 14:33:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:52.802 14:33:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:52.802 14:33:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.802 14:33:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.802 14:33:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:52.802 14:33:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:52.802 14:33:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.802 14:33:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:52.802 14:33:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:53.061 14:33:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:53.061 14:33:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:53.061 14:33:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:53.061 14:33:52 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:53.061 14:33:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:53.061 14:33:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:53.061 14:33:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:53.061 14:33:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:53.061 14:33:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:53.061 14:33:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.061 14:33:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:53.319 14:33:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:53.319 14:33:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:53.319 14:33:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:53.319 14:33:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:53.319 14:33:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:53.319 14:33:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:53.319 14:33:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:53.319 14:33:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:53.319 14:33:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:53.319 14:33:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:53.319 14:33:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:53.319 14:33:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:53.319 14:33:52 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:53.885 14:33:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:54.820 
[2024-11-04 14:33:53.877811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:55.079 [2024-11-04 14:33:54.007127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.079 [2024-11-04 14:33:54.007141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.079 [2024-11-04 14:33:54.200016] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:55.079 [2024-11-04 14:33:54.200126] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:56.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:56.980 14:33:55 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58288 /var/tmp/spdk-nbd.sock 00:07:56.980 14:33:55 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58288 ']' 00:07:56.980 14:33:55 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:56.980 14:33:55 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:56.980 14:33:55 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
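The `waitforlisten 58288 /var/tmp/spdk-nbd.sock` call here is a bounded poll: keep checking that the target pid is still alive and that its UNIX-domain RPC socket has appeared, giving up after `max_retries` (100 in the trace). A simplified sketch of that loop — the real helper also issues an RPC through the socket, while this version only probes for the socket file:

```shell
# Simplified sketch of the waitforlisten pattern seen throughout this
# log: succeed once the process is up AND its UNIX-domain socket
# exists; fail if the process dies or the retry budget runs out.
waitforlisten_sketch() {
    local pid=$1 sock=$2 max_retries=${3:-100} i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        [ -S "$sock" ] && return 0               # socket file is present
        sleep 0.1
    done
    return 1                                     # gave up waiting
}
```

Checking the pid on every iteration is the important detail: without it, a crashed target would make the caller block for the full retry budget instead of failing fast.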
00:07:56.980 14:33:55 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:56.980 14:33:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:57.239 14:33:56 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:57.239 14:33:56 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:07:57.239 14:33:56 event.app_repeat -- event/event.sh@39 -- # killprocess 58288 00:07:57.239 14:33:56 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 58288 ']' 00:07:57.239 14:33:56 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 58288 00:07:57.239 14:33:56 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:07:57.239 14:33:56 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:57.239 14:33:56 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58288 00:07:57.239 killing process with pid 58288 00:07:57.239 14:33:56 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:57.239 14:33:56 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:57.239 14:33:56 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58288' 00:07:57.239 14:33:56 event.app_repeat -- common/autotest_common.sh@971 -- # kill 58288 00:07:57.239 14:33:56 event.app_repeat -- common/autotest_common.sh@976 -- # wait 58288 00:07:58.188 spdk_app_start is called in Round 0. 00:07:58.188 Shutdown signal received, stop current app iteration 00:07:58.188 Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 reinitialization... 00:07:58.188 spdk_app_start is called in Round 1. 00:07:58.188 Shutdown signal received, stop current app iteration 00:07:58.188 Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 reinitialization... 00:07:58.188 spdk_app_start is called in Round 2. 
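The `killprocess 58288` trace above follows a fixed recipe: `kill -0` to confirm the pid is alive, `ps --no-headers -o comm=` to log the command name (here `reactor_0`), then SIGTERM followed by `wait` to reap the exit status. Sketched below (helper name ours; `wait` only reaps children of the current shell):

```shell
# Sketch of the kill-and-reap pattern from the killprocess trace:
# verify the pid is alive, log its command name, SIGTERM it, and
# reap it so the exit status is collected.
killprocess_sketch() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 1        # process already gone?
    name=$(ps --no-headers -o comm= -p "$pid")    # command name, e.g. reactor_0
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # SIGTERM exit status is 143; ignore it
    return 0
}
```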
00:07:58.188 Shutdown signal received, stop current app iteration 00:07:58.188 Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 reinitialization... 00:07:58.188 spdk_app_start is called in Round 3. 00:07:58.188 Shutdown signal received, stop current app iteration 00:07:58.188 ************************************ 00:07:58.188 END TEST app_repeat 00:07:58.188 ************************************ 00:07:58.188 14:33:57 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:58.188 14:33:57 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:58.188 00:07:58.188 real 0m22.201s 00:07:58.188 user 0m49.573s 00:07:58.188 sys 0m3.108s 00:07:58.188 14:33:57 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:58.188 14:33:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:58.188 14:33:57 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:58.188 14:33:57 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:58.188 14:33:57 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:58.188 14:33:57 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:58.188 14:33:57 event -- common/autotest_common.sh@10 -- # set +x 00:07:58.188 ************************************ 00:07:58.188 START TEST cpu_locks 00:07:58.188 ************************************ 00:07:58.188 14:33:57 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:58.188 * Looking for test storage... 
00:07:58.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:58.188 14:33:57 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:58.188 14:33:57 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:07:58.188 14:33:57 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:58.448 14:33:57 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:58.448 14:33:57 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:58.448 14:33:57 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:58.448 14:33:57 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:58.448 14:33:57 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:58.448 14:33:57 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:58.448 14:33:57 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:58.448 14:33:57 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:58.448 14:33:57 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:58.448 14:33:57 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:58.448 14:33:57 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:58.448 14:33:57 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:58.448 14:33:57 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:58.448 14:33:57 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:58.448 14:33:57 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:58.448 14:33:57 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:58.448 14:33:57 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:58.448 14:33:57 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:58.448 14:33:57 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:58.448 14:33:57 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:58.448 14:33:57 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:58.448 14:33:57 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:58.448 14:33:57 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:58.448 14:33:57 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:58.448 14:33:57 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:58.448 14:33:57 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:58.448 14:33:57 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:58.448 14:33:57 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:58.448 14:33:57 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:58.448 14:33:57 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:58.448 14:33:57 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:58.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.448 --rc genhtml_branch_coverage=1 00:07:58.448 --rc genhtml_function_coverage=1 00:07:58.448 --rc genhtml_legend=1 00:07:58.448 --rc geninfo_all_blocks=1 00:07:58.448 --rc geninfo_unexecuted_blocks=1 00:07:58.448 00:07:58.448 ' 00:07:58.448 14:33:57 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:58.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.448 --rc genhtml_branch_coverage=1 00:07:58.448 --rc genhtml_function_coverage=1 00:07:58.448 --rc genhtml_legend=1 00:07:58.448 --rc geninfo_all_blocks=1 00:07:58.448 --rc geninfo_unexecuted_blocks=1 
00:07:58.448 00:07:58.448 ' 00:07:58.448 14:33:57 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:58.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.448 --rc genhtml_branch_coverage=1 00:07:58.448 --rc genhtml_function_coverage=1 00:07:58.448 --rc genhtml_legend=1 00:07:58.448 --rc geninfo_all_blocks=1 00:07:58.448 --rc geninfo_unexecuted_blocks=1 00:07:58.448 00:07:58.448 ' 00:07:58.448 14:33:57 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:58.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.448 --rc genhtml_branch_coverage=1 00:07:58.448 --rc genhtml_function_coverage=1 00:07:58.448 --rc genhtml_legend=1 00:07:58.448 --rc geninfo_all_blocks=1 00:07:58.448 --rc geninfo_unexecuted_blocks=1 00:07:58.448 00:07:58.448 ' 00:07:58.448 14:33:57 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:58.448 14:33:57 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:58.448 14:33:57 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:58.448 14:33:57 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:58.448 14:33:57 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:58.448 14:33:57 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:58.448 14:33:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:58.448 ************************************ 00:07:58.448 START TEST default_locks 00:07:58.448 ************************************ 00:07:58.448 14:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:07:58.448 14:33:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58767 00:07:58.448 14:33:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:58.448 
14:33:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58767 00:07:58.448 14:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58767 ']' 00:07:58.448 14:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.448 14:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:58.448 14:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.448 14:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:58.448 14:33:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:58.448 [2024-11-04 14:33:57.518082] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
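The block of `scripts/common.sh` calls traced earlier (`lt 1.15 2` through `return 0`) is SPDK's version comparator: split both version strings on `.`, `-`, or `:` via `IFS`, then walk the components numerically, treating a missing component as zero. A condensed sketch of the same logic — helper name ours, and purely numeric components are assumed (a suffix like `-pre` would break the arithmetic):

```shell
# Condensed sketch of the cmp_versions "lt" path traced in this log:
# split on . - : and compare field by field. Returns 0 iff ver1 < ver2.
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v a b n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < n; v++)); do
        a=${ver1[v]:-0}                 # missing fields compare as 0
        b=${ver2[v]:-0}
        if ((a < b)); then return 0; fi
        if ((a > b)); then return 1; fi
    done
    return 1                            # equal is not less-than
}
```

This is why the log's `lt 1.15 2` succeeds: the first fields already decide the comparison (1 < 2), so `1.15` sorts below `2` despite being textually longer.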
00:07:58.448 [2024-11-04 14:33:57.518243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58767 ] 00:07:58.707 [2024-11-04 14:33:57.692200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.707 [2024-11-04 14:33:57.819075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.642 14:33:58 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:59.642 14:33:58 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:07:59.642 14:33:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58767 00:07:59.642 14:33:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58767 00:07:59.642 14:33:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:59.901 14:33:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58767 00:07:59.901 14:33:58 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 58767 ']' 00:07:59.901 14:33:58 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 58767 00:07:59.901 14:33:58 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:07:59.901 14:33:58 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:59.901 14:33:58 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58767 00:07:59.901 killing process with pid 58767 00:07:59.901 14:33:59 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:59.901 14:33:59 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:59.901 14:33:59 event.cpu_locks.default_locks -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 58767' 00:07:59.901 14:33:59 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 58767 00:07:59.901 14:33:59 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 58767 00:08:02.459 14:34:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58767 00:08:02.459 14:34:01 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:08:02.459 14:34:01 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58767 00:08:02.459 14:34:01 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:02.459 14:34:01 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:02.459 14:34:01 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:02.459 14:34:01 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:02.459 14:34:01 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58767 00:08:02.459 14:34:01 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58767 ']' 00:08:02.459 14:34:01 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.459 14:34:01 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:02.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.459 14:34:01 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
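The `locks_exist 58767` check traced above boils down to a single pipeline: `lslocks -p <pid> | grep -q spdk_cpu_lock`, i.e. ask util-linux which file locks the target process holds and look for SPDK's per-core lock files. As a sketch:

```shell
# Sketch of the locks_exist check from cpu_locks.sh@22: a process that
# owns a CPU core holds a lock on an spdk_cpu_lock file, visible via
# util-linux lslocks. Returns grep's status: 0 iff the lock is held.
locks_exist_sketch() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}
```

This is the assertion at the heart of the `default_locks` test: after starting `spdk_tgt -m 0x1`, the lock must exist; after killing the target, the subsequent `no_locks` pass expects zero matching lock files.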
00:08:02.459 14:34:01 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:02.459 14:34:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:02.459 ERROR: process (pid: 58767) is no longer running 00:08:02.459 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58767) - No such process 00:08:02.459 14:34:01 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:02.459 14:34:01 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:08:02.459 14:34:01 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:08:02.459 14:34:01 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:02.459 14:34:01 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:02.459 14:34:01 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:02.459 14:34:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:02.459 14:34:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:02.459 14:34:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:02.459 14:34:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:02.459 00:08:02.459 real 0m3.871s 00:08:02.459 user 0m3.938s 00:08:02.459 sys 0m0.655s 00:08:02.459 ************************************ 00:08:02.459 END TEST default_locks 00:08:02.459 ************************************ 00:08:02.459 14:34:01 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:02.459 14:34:01 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:02.459 14:34:01 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:02.459 14:34:01 event.cpu_locks -- common/autotest_common.sh@1103 -- # 
'[' 2 -le 1 ']' 00:08:02.459 14:34:01 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:02.459 14:34:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:02.459 ************************************ 00:08:02.459 START TEST default_locks_via_rpc 00:08:02.459 ************************************ 00:08:02.459 14:34:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:08:02.459 14:34:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58840 00:08:02.459 14:34:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58840 00:08:02.459 14:34:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58840 ']' 00:08:02.459 14:34:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:02.459 14:34:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.459 14:34:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:02.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.459 14:34:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.459 14:34:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:02.459 14:34:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.459 [2024-11-04 14:34:01.457400] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:08:02.459 [2024-11-04 14:34:01.457594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58840 ] 00:08:02.717 [2024-11-04 14:34:01.685061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.717 [2024-11-04 14:34:01.819680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.654 14:34:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:03.654 14:34:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:03.654 14:34:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:03.654 14:34:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.654 14:34:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.654 14:34:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.654 14:34:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:03.654 14:34:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:03.654 14:34:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:03.654 14:34:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:03.654 14:34:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:03.654 14:34:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.654 14:34:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.654 14:34:02 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.654 14:34:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58840 00:08:03.654 14:34:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:03.654 14:34:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58840 00:08:04.222 14:34:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58840 00:08:04.222 14:34:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 58840 ']' 00:08:04.222 14:34:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 58840 00:08:04.222 14:34:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:08:04.222 14:34:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:04.222 14:34:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58840 00:08:04.222 14:34:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:04.222 14:34:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:04.222 killing process with pid 58840 00:08:04.222 14:34:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58840' 00:08:04.222 14:34:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 58840 00:08:04.222 14:34:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 58840 00:08:06.755 00:08:06.755 real 0m4.167s 00:08:06.755 user 0m4.226s 00:08:06.755 sys 0m0.815s 00:08:06.755 ************************************ 00:08:06.755 END TEST default_locks_via_rpc 00:08:06.755 ************************************ 00:08:06.755 
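The `default_locks_via_rpc` run above starts `spdk_tgt` on core mask `0x1`, toggles the locks with the `framework_disable_cpumask_locks` / `framework_enable_cpumask_locks` RPCs, and then checks `lslocks -p <pid> | grep spdk_cpu_lock` to confirm a per-core lock file is held. As a hedged illustration of the underlying mechanism only — an advisory `flock(2)` on a plain file, not actual SPDK code; the lock path here is a throwaway `mktemp` file standing in for `/var/tmp/spdk_cpu_lock_*`:

```shell
# Stand-in lock file; SPDK's real files are /var/tmp/spdk_cpu_lock_<core>.
lockfile=$(mktemp /tmp/demo_cpu_lock_XXXXXX)

# Hold an exclusive advisory lock on fd 9 - the same flock(2) mechanism
# that lslocks reports for spdk_tgt's lock files.
exec 9>"$lockfile"
flock -n 9 && echo "lock acquired"

# A second non-blocking exclusive claim on the same file must fail
# while fd 9 is still open.
flock -n "$lockfile" -c 'true' || echo "second claim rejected"

exec 9>&-        # release the lock by closing the descriptor
rm -f "$lockfile"
```

`lslocks` can see such locks because `flock(2)` holders are listed in `/proc/locks`, which is what the test's `lslocks -p <pid>` check relies on.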
14:34:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:06.755 14:34:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.755 14:34:05 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:06.755 14:34:05 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:06.755 14:34:05 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:06.755 14:34:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:06.755 ************************************ 00:08:06.755 START TEST non_locking_app_on_locked_coremask 00:08:06.755 ************************************ 00:08:06.755 14:34:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:08:06.755 14:34:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58914 00:08:06.755 14:34:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:06.755 14:34:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58914 /var/tmp/spdk.sock 00:08:06.755 14:34:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58914 ']' 00:08:06.755 14:34:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:06.755 14:34:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:06.755 14:34:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.755 14:34:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:06.755 14:34:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:06.755 [2024-11-04 14:34:05.703604] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:08:06.755 [2024-11-04 14:34:05.703858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58914 ] 00:08:07.013 [2024-11-04 14:34:05.931138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.271 [2024-11-04 14:34:06.148826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:08.207 14:34:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:08.207 14:34:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:08.207 14:34:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58941 00:08:08.207 14:34:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58941 /var/tmp/spdk2.sock 00:08:08.207 14:34:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:08.207 14:34:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58941 ']' 00:08:08.207 14:34:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:08.207 14:34:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:08.207 14:34:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:08.207 14:34:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:08.207 14:34:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:08.207 [2024-11-04 14:34:07.181339] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:08:08.207 [2024-11-04 14:34:07.181697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58941 ] 00:08:08.465 [2024-11-04 14:34:07.376418] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:08.465 [2024-11-04 14:34:07.376520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.722 [2024-11-04 14:34:07.641086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.251 14:34:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:11.251 14:34:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:11.251 14:34:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58914 00:08:11.251 14:34:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58914 00:08:11.251 14:34:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:11.817 14:34:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58914 00:08:11.817 14:34:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58914 ']' 00:08:11.817 14:34:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58914 00:08:11.817 14:34:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:11.817 14:34:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:11.817 14:34:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
58914 00:08:11.817 killing process with pid 58914 00:08:11.817 14:34:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:11.817 14:34:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:11.817 14:34:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58914' 00:08:11.817 14:34:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58914 00:08:11.817 14:34:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58914 00:08:17.091 14:34:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58941 00:08:17.091 14:34:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58941 ']' 00:08:17.091 14:34:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58941 00:08:17.091 14:34:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:17.091 14:34:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:17.091 14:34:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58941 00:08:17.091 14:34:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:17.091 14:34:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:17.091 14:34:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58941' 00:08:17.091 killing process with pid 58941 00:08:17.091 14:34:15 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58941 00:08:17.091 14:34:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58941 00:08:18.496 00:08:18.496 real 0m11.897s 00:08:18.496 user 0m12.492s 00:08:18.496 sys 0m1.458s 00:08:18.496 ************************************ 00:08:18.496 END TEST non_locking_app_on_locked_coremask 00:08:18.496 ************************************ 00:08:18.496 14:34:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:18.496 14:34:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:18.496 14:34:17 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:18.496 14:34:17 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:18.496 14:34:17 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:18.496 14:34:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:18.496 ************************************ 00:08:18.496 START TEST locking_app_on_unlocked_coremask 00:08:18.496 ************************************ 00:08:18.496 14:34:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:08:18.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
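In `non_locking_app_on_locked_coremask` above, the first `spdk_tgt` (pid 58914) claims core 0, and a second instance (pid 58941) launched with `--disable-cpumask-locks` starts on the same core anyway — the log notes "CPU core locks deactivated". A minimal sketch of why that works, assuming the same advisory-lock model as before (not SPDK code): the non-locking peer simply never attempts the `flock`, so nothing conflicts.

```shell
lockfile=$(mktemp /tmp/demo_cpu_lock_XXXXXX)

# First instance: claims the core lock.
exec 9>"$lockfile"
flock -n 9 && echo "holder: core claimed"

# "--disable-cpumask-locks" analogue: the second instance may open the
# file but takes no lock at all, so it coexists with the holder.
exec 8<"$lockfile" && echo "non-locking peer: started on same core"

exec 8<&- 9>&-
rm -f "$lockfile"
```

Advisory locks only constrain processes that also call `flock(2)`; a peer that skips the call is unaffected, which is exactly the behavior this test exercises.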
00:08:18.496 14:34:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59092 00:08:18.496 14:34:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59092 /var/tmp/spdk.sock 00:08:18.496 14:34:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:18.496 14:34:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59092 ']' 00:08:18.496 14:34:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.496 14:34:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:18.496 14:34:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.496 14:34:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:18.497 14:34:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:18.497 [2024-11-04 14:34:17.610270] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:08:18.497 [2024-11-04 14:34:17.610480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59092 ] 00:08:18.754 [2024-11-04 14:34:17.792339] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:18.754 [2024-11-04 14:34:17.792427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.013 [2024-11-04 14:34:17.922060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:19.949 14:34:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:19.949 14:34:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:19.949 14:34:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59108 00:08:19.949 14:34:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:19.949 14:34:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59108 /var/tmp/spdk2.sock 00:08:19.949 14:34:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59108 ']' 00:08:19.949 14:34:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:19.949 14:34:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:19.949 14:34:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:19.949 14:34:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:19.949 14:34:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:19.949 [2024-11-04 14:34:18.926840] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:08:19.949 [2024-11-04 14:34:18.927291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59108 ] 00:08:20.227 [2024-11-04 14:34:19.123405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.486 [2024-11-04 14:34:19.386001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.017 14:34:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:23.017 14:34:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:23.017 14:34:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59108 00:08:23.017 14:34:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59108 00:08:23.017 14:34:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:23.584 14:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59092 00:08:23.584 14:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59092 ']' 00:08:23.584 14:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59092 00:08:23.584 14:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:23.584 14:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:23.584 14:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59092 00:08:23.584 killing process with pid 59092 00:08:23.584 14:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:23.584 14:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:23.584 14:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59092' 00:08:23.584 14:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59092 00:08:23.584 14:34:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59092 00:08:28.850 14:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59108 00:08:28.850 14:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59108 ']' 00:08:28.850 14:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59108 00:08:28.850 14:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:28.850 14:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:28.850 14:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59108 00:08:28.850 killing process with pid 59108 00:08:28.850 14:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:28.850 14:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:28.850 14:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59108' 00:08:28.850 14:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59108 00:08:28.850 14:34:27 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@976 -- # wait 59108 00:08:30.754 00:08:30.754 real 0m11.910s 00:08:30.754 user 0m12.442s 00:08:30.754 sys 0m1.529s 00:08:30.754 14:34:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:30.754 ************************************ 00:08:30.754 END TEST locking_app_on_unlocked_coremask 00:08:30.754 ************************************ 00:08:30.754 14:34:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:30.754 14:34:29 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:30.754 14:34:29 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:30.754 14:34:29 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:30.754 14:34:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:30.754 ************************************ 00:08:30.754 START TEST locking_app_on_locked_coremask 00:08:30.754 ************************************ 00:08:30.754 14:34:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:08:30.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
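Each test above tears down its targets through the same `killprocess` helper, whose trace is visible in the log: probe the pid with `kill -0`, read the process name with `ps --no-headers -o comm=`, then `kill` and `wait`. A hedged re-creation of that pattern, with `sleep` standing in for `spdk_tgt`:

```shell
# Background stand-in for spdk_tgt.
sleep 60 &
pid=$!

# kill -0 sends no signal; it only checks the pid exists.
kill -0 "$pid" && echo "pid $pid is alive"

# Same name check the helper performs before deciding how to kill.
name=$(ps --no-headers -o comm= -p "$pid")
echo "process name: $name"

kill "$pid"
wait "$pid" 2>/dev/null || true   # wait returns the (nonzero) kill status

kill -0 "$pid" 2>/dev/null || echo "pid $pid is gone"
```

The `|| true` on `wait` matters: a process killed by SIGTERM makes `wait` return 143, which would otherwise abort a script running under `set -e`.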
00:08:30.754 14:34:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59262 00:08:30.754 14:34:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59262 /var/tmp/spdk.sock 00:08:30.754 14:34:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59262 ']' 00:08:30.754 14:34:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.754 14:34:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:30.754 14:34:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.754 14:34:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:30.754 14:34:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:30.754 14:34:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:30.754 [2024-11-04 14:34:29.567656] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:08:30.754 [2024-11-04 14:34:29.567832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59262 ] 00:08:30.754 [2024-11-04 14:34:29.752280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.011 [2024-11-04 14:34:29.882159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.946 14:34:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:31.946 14:34:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:31.946 14:34:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59278 00:08:31.946 14:34:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:31.946 14:34:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59278 /var/tmp/spdk2.sock 00:08:31.946 14:34:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:31.946 14:34:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59278 /var/tmp/spdk2.sock 00:08:31.946 14:34:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:31.946 14:34:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:31.946 14:34:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:31.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:31.946 14:34:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:31.946 14:34:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59278 /var/tmp/spdk2.sock 00:08:31.946 14:34:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59278 ']' 00:08:31.946 14:34:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:31.946 14:34:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:31.946 14:34:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:31.946 14:34:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:31.946 14:34:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:31.946 [2024-11-04 14:34:30.880709] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:08:31.946 [2024-11-04 14:34:30.880856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59278 ] 00:08:32.205 [2024-11-04 14:34:31.085994] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59262 has claimed it. 00:08:32.205 [2024-11-04 14:34:31.086107] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:08:32.793 ERROR: process (pid: 59278) is no longer running 00:08:32.793 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59278) - No such process 00:08:32.793 14:34:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:32.793 14:34:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:08:32.793 14:34:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:32.793 14:34:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:32.793 14:34:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:32.793 14:34:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:32.793 14:34:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59262 00:08:32.793 14:34:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59262 00:08:32.793 14:34:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:33.055 14:34:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59262 00:08:33.055 14:34:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59262 ']' 00:08:33.055 14:34:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59262 00:08:33.055 14:34:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:33.055 14:34:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:33.055 14:34:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59262 00:08:33.055 
killing process with pid 59262 00:08:33.055 14:34:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:33.055 14:34:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:33.055 14:34:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59262' 00:08:33.055 14:34:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59262 00:08:33.055 14:34:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59262 00:08:35.586 00:08:35.586 real 0m4.959s 00:08:35.586 user 0m5.351s 00:08:35.586 sys 0m0.921s 00:08:35.586 14:34:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:35.586 14:34:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:35.586 ************************************ 00:08:35.586 END TEST locking_app_on_locked_coremask 00:08:35.586 ************************************ 00:08:35.586 14:34:34 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:35.586 14:34:34 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:35.586 14:34:34 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:35.586 14:34:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:35.586 ************************************ 00:08:35.586 START TEST locking_overlapped_coremask 00:08:35.586 ************************************ 00:08:35.586 14:34:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:08:35.586 14:34:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59353 00:08:35.586 14:34:34 
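`locking_app_on_locked_coremask` above is a negative test: the second `spdk_tgt` (pid 59278) is expected to die with "Cannot create lock on core 0, probably process 59262 has claimed it", and the harness's `NOT` wrapper passes only when the inner command fails (`es=1` in the trace). A sketch of that expected-failure pattern under the same assumed flock model (demo file, not SPDK code):

```shell
lockfile=$(mktemp /tmp/demo_cpu_lock_XXXXXX)

# First instance holds the core lock.
exec 9>"$lockfile"
flock -n 9

# Second instance: the claim must fail, and the test succeeds precisely
# because the inner command's exit status is nonzero.
if ! flock -n "$lockfile" -c 'echo unreachable'; then
    echo "second instance exited as expected"
fi

exec 9>&-
rm -f "$lockfile"
```

This mirrors the `NOT waitforlisten` sequence in the log, where the framework inverts the exit status so that a refused lock counts as a pass.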
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59353 /var/tmp/spdk.sock 00:08:35.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.586 14:34:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59353 ']' 00:08:35.586 14:34:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.586 14:34:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:35.586 14:34:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:35.586 14:34:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.586 14:34:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:35.586 14:34:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:35.586 [2024-11-04 14:34:34.638259] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:08:35.586 [2024-11-04 14:34:34.638498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59353 ] 00:08:35.845 [2024-11-04 14:34:34.830374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:36.104 [2024-11-04 14:34:34.983528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.104 [2024-11-04 14:34:34.983626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.104 [2024-11-04 14:34:34.983640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:37.041 14:34:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:37.041 14:34:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:37.041 14:34:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59371 00:08:37.041 14:34:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59371 /var/tmp/spdk2.sock 00:08:37.041 14:34:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:37.041 14:34:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:37.041 14:34:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59371 /var/tmp/spdk2.sock 00:08:37.041 14:34:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:37.041 14:34:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:37.041 14:34:35 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:37.041 14:34:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:37.041 14:34:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59371 /var/tmp/spdk2.sock 00:08:37.041 14:34:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59371 ']' 00:08:37.041 14:34:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:37.041 14:34:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:37.041 14:34:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:37.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:37.041 14:34:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:37.041 14:34:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:37.041 [2024-11-04 14:34:36.009689] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:08:37.041 [2024-11-04 14:34:36.010681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59371 ] 00:08:37.301 [2024-11-04 14:34:36.215159] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59353 has claimed it. 00:08:37.301 [2024-11-04 14:34:36.215275] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:08:37.560 ERROR: process (pid: 59371) is no longer running 00:08:37.560 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59371) - No such process 00:08:37.560 14:34:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:37.560 14:34:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:08:37.560 14:34:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:37.560 14:34:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:37.560 14:34:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:37.560 14:34:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:37.560 14:34:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:37.560 14:34:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:37.560 14:34:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:37.560 14:34:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:37.560 14:34:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59353 00:08:37.560 14:34:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 59353 ']' 00:08:37.560 14:34:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 59353 00:08:37.560 14:34:36 
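The `check_remaining_locks` trace above verifies that the surviving target still owns exactly the lock files for cores 0-2 by comparing a glob against a brace expansion. A minimal, self-contained sketch of that helper, assuming the naming scheme `spdk_cpu_lock_NNN` seen in the trace (the directory is parameterized here for illustration; the real suite hard-codes `/var/tmp`):

```shell
# Sketch of check_remaining_locks from event/cpu_locks.sh (simplified).
# spdk_tgt drops one lock file per claimed core; the test compares what the
# glob actually finds against the brace-expanded list it expects.
check_remaining_locks() {
    local lockdir=$1
    local locks=("$lockdir"/spdk_cpu_lock_*)              # what exists
    local locks_expected=("$lockdir"/spdk_cpu_lock_{000..002})  # what should exist
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]
}
```

Because both arrays sort lexically and `{000..002}` zero-pads, a byte-for-byte comparison of the joined lists is enough; any extra, missing, or misnamed lock file makes it fail.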
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:08:37.818 14:34:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:37.818 14:34:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59353 00:08:37.818 killing process with pid 59353 00:08:37.818 14:34:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:37.818 14:34:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:37.818 14:34:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59353' 00:08:37.818 14:34:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 59353 00:08:37.818 14:34:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 59353 00:08:40.346 00:08:40.346 real 0m4.592s 00:08:40.346 user 0m12.454s 00:08:40.346 sys 0m0.743s 00:08:40.346 14:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:40.346 ************************************ 00:08:40.346 14:34:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:40.346 END TEST locking_overlapped_coremask 00:08:40.346 ************************************ 00:08:40.346 14:34:39 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:40.346 14:34:39 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:40.346 14:34:39 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:40.346 14:34:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:40.346 ************************************ 00:08:40.346 START TEST 
locking_overlapped_coremask_via_rpc 00:08:40.346 ************************************ 00:08:40.346 14:34:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:08:40.346 14:34:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59435 00:08:40.346 14:34:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59435 /var/tmp/spdk.sock 00:08:40.346 14:34:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:40.346 14:34:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59435 ']' 00:08:40.346 14:34:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.346 14:34:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:40.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.346 14:34:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.346 14:34:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:40.346 14:34:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.346 [2024-11-04 14:34:39.242097] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:08:40.347 [2024-11-04 14:34:39.242304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59435 ] 00:08:40.347 [2024-11-04 14:34:39.426447] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:40.347 [2024-11-04 14:34:39.426520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:40.605 [2024-11-04 14:34:39.563789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.605 [2024-11-04 14:34:39.563872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.605 [2024-11-04 14:34:39.563872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:41.538 14:34:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:41.538 14:34:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:41.538 14:34:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59464 00:08:41.538 14:34:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:41.538 14:34:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59464 /var/tmp/spdk2.sock 00:08:41.538 14:34:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59464 ']' 00:08:41.538 14:34:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:41.538 14:34:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:41.538 14:34:40 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:41.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:41.538 14:34:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:41.538 14:34:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:41.538 [2024-11-04 14:34:40.596122] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:08:41.538 [2024-11-04 14:34:40.596636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59464 ] 00:08:41.797 [2024-11-04 14:34:40.798533] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:41.797 [2024-11-04 14:34:40.798623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:42.055 [2024-11-04 14:34:41.077209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:42.055 [2024-11-04 14:34:41.077271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:42.055 [2024-11-04 14:34:41.077280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:44.631 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:44.631 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:44.631 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:44.631 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.631 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.631 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.632 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:44.632 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:44.632 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:44.632 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:44.632 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:44.632 14:34:43 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:44.632 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:44.632 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:44.632 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.632 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.632 [2024-11-04 14:34:43.374219] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59435 has claimed it. 00:08:44.632 request: 00:08:44.632 { 00:08:44.632 "method": "framework_enable_cpumask_locks", 00:08:44.632 "req_id": 1 00:08:44.632 } 00:08:44.632 Got JSON-RPC error response 00:08:44.632 response: 00:08:44.632 { 00:08:44.632 "code": -32603, 00:08:44.632 "message": "Failed to claim CPU core: 2" 00:08:44.632 } 00:08:44.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:44.632 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:44.632 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:44.632 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:44.632 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:44.632 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:44.632 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59435 /var/tmp/spdk.sock 00:08:44.632 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59435 ']' 00:08:44.632 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.632 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:44.632 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
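The `es=1`, `(( es > 128 ))`, `(( !es == 0 ))` steps traced above come from the `NOT` wrapper in autotest_common.sh, which runs a command that is *expected* to fail. A condensed sketch of that logic (simplified from the trace; the real helper also validates the argument with `valid_exec_arg`):

```shell
# Sketch of the NOT helper: invert a command's status, but still treat
# signal deaths (exit codes above 128) as real failures of the harness.
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"  # killed by a signal: propagate
    (( es != 0 ))                   # NOT succeeds iff the command failed
}
```

So `NOT waitforlisten 59371 /var/tmp/spdk2.sock` in the log succeeds precisely because the second target died after failing to claim core 2.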
00:08:44.632 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:44.632 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.632 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:44.632 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:44.632 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59464 /var/tmp/spdk2.sock 00:08:44.632 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59464 ']' 00:08:44.632 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:44.632 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:44.632 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:44.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:44.632 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:44.632 14:34:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:45.200 14:34:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:45.200 14:34:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:45.200 14:34:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:45.200 14:34:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:45.200 14:34:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:45.200 14:34:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:45.200 00:08:45.200 real 0m4.950s 00:08:45.200 user 0m1.890s 00:08:45.200 sys 0m0.244s 00:08:45.200 14:34:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:45.200 14:34:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:45.200 ************************************ 00:08:45.200 END TEST locking_overlapped_coremask_via_rpc 00:08:45.200 ************************************ 00:08:45.200 14:34:44 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:45.200 14:34:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59435 ]] 00:08:45.200 14:34:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59435 00:08:45.200 14:34:44 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59435 ']' 00:08:45.200 14:34:44 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59435 00:08:45.200 14:34:44 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:08:45.200 14:34:44 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:45.200 14:34:44 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59435 00:08:45.200 killing process with pid 59435 00:08:45.200 14:34:44 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:45.200 14:34:44 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:45.200 14:34:44 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59435' 00:08:45.200 14:34:44 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59435 00:08:45.200 14:34:44 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59435 00:08:47.730 14:34:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59464 ]] 00:08:47.730 14:34:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59464 00:08:47.730 14:34:46 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59464 ']' 00:08:47.730 14:34:46 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59464 00:08:47.730 14:34:46 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:08:47.730 14:34:46 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:47.730 14:34:46 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59464 00:08:47.730 killing process with pid 59464 00:08:47.730 14:34:46 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:08:47.730 14:34:46 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:08:47.730 14:34:46 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing 
process with pid 59464' 00:08:47.730 14:34:46 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59464 00:08:47.730 14:34:46 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59464 00:08:49.631 14:34:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:49.631 Process with pid 59435 is not found 00:08:49.631 Process with pid 59464 is not found 00:08:49.631 14:34:48 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:49.631 14:34:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59435 ]] 00:08:49.631 14:34:48 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59435 00:08:49.631 14:34:48 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59435 ']' 00:08:49.631 14:34:48 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59435 00:08:49.631 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59435) - No such process 00:08:49.631 14:34:48 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59435 is not found' 00:08:49.632 14:34:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59464 ]] 00:08:49.632 14:34:48 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59464 00:08:49.632 14:34:48 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59464 ']' 00:08:49.632 14:34:48 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59464 00:08:49.632 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59464) - No such process 00:08:49.632 14:34:48 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59464 is not found' 00:08:49.632 14:34:48 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:49.632 00:08:49.632 real 0m51.496s 00:08:49.632 user 1m29.572s 00:08:49.632 sys 0m7.578s 00:08:49.632 14:34:48 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:49.632 14:34:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:49.632 
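The `kill -0` probes and the "No such process" / "Process with pid … is not found" lines above come from the `killprocess` helper. A runnable sketch of its core shape (simplified: the real helper also inspects the process name via `ps` and special-cases `sudo`, as the `reactor_0` trace shows):

```shell
# Sketch of killprocess: probe liveness with the null signal (kill -0),
# terminate only if the pid still exists, then reap it.
killprocess() {
    local pid=$1
    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid"
        wait "$pid" 2>/dev/null || true  # reap; ignore the signal exit status
    else
        echo "Process with pid $pid is not found"
    fi
}
```

`kill -0` sends no signal at all; it only checks whether the pid exists and is signalable, which is why the cleanup pass above can safely call it on already-dead targets.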
************************************ 00:08:49.632 END TEST cpu_locks 00:08:49.632 ************************************ 00:08:49.632 00:08:49.632 real 1m24.969s 00:08:49.632 user 2m37.218s 00:08:49.632 sys 0m11.832s 00:08:49.632 14:34:48 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:49.632 14:34:48 event -- common/autotest_common.sh@10 -- # set +x 00:08:49.632 ************************************ 00:08:49.891 END TEST event 00:08:49.891 ************************************ 00:08:49.891 14:34:48 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:49.891 14:34:48 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:49.891 14:34:48 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:49.891 14:34:48 -- common/autotest_common.sh@10 -- # set +x 00:08:49.891 ************************************ 00:08:49.891 START TEST thread 00:08:49.891 ************************************ 00:08:49.891 14:34:48 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:49.891 * Looking for test storage... 
00:08:49.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:49.891 14:34:48 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:49.891 14:34:48 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:08:49.891 14:34:48 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:49.891 14:34:48 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:49.891 14:34:48 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:49.891 14:34:48 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:49.891 14:34:48 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:49.892 14:34:48 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.892 14:34:48 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:49.892 14:34:48 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:49.892 14:34:48 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:49.892 14:34:48 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:49.892 14:34:48 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:49.892 14:34:48 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:49.892 14:34:48 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:49.892 14:34:48 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:49.892 14:34:48 thread -- scripts/common.sh@345 -- # : 1 00:08:49.892 14:34:48 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:49.892 14:34:48 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:49.892 14:34:48 thread -- scripts/common.sh@365 -- # decimal 1 00:08:49.892 14:34:48 thread -- scripts/common.sh@353 -- # local d=1 00:08:49.892 14:34:48 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.892 14:34:48 thread -- scripts/common.sh@355 -- # echo 1 00:08:49.892 14:34:48 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:49.892 14:34:48 thread -- scripts/common.sh@366 -- # decimal 2 00:08:49.892 14:34:48 thread -- scripts/common.sh@353 -- # local d=2 00:08:49.892 14:34:48 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.892 14:34:48 thread -- scripts/common.sh@355 -- # echo 2 00:08:49.892 14:34:48 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:49.892 14:34:48 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:49.892 14:34:48 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:49.892 14:34:48 thread -- scripts/common.sh@368 -- # return 0 00:08:49.892 14:34:48 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.892 14:34:48 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:49.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.892 --rc genhtml_branch_coverage=1 00:08:49.892 --rc genhtml_function_coverage=1 00:08:49.892 --rc genhtml_legend=1 00:08:49.892 --rc geninfo_all_blocks=1 00:08:49.892 --rc geninfo_unexecuted_blocks=1 00:08:49.892 00:08:49.892 ' 00:08:49.892 14:34:48 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:49.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.892 --rc genhtml_branch_coverage=1 00:08:49.892 --rc genhtml_function_coverage=1 00:08:49.892 --rc genhtml_legend=1 00:08:49.892 --rc geninfo_all_blocks=1 00:08:49.892 --rc geninfo_unexecuted_blocks=1 00:08:49.892 00:08:49.892 ' 00:08:49.892 14:34:48 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:49.892 --rc 
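The long `scripts/common.sh` trace above is the `lt 1.15 2` version gate deciding which lcov flags to use: both version strings are split on `.`/`-` and compared field by field. A condensed re-implementation of that comparison (a sketch that treats missing fields as 0 and assumes purely numeric fields, which the real `decimal` helper enforces):

```shell
# Condensed sketch of cmp_versions/lt from scripts/common.sh: split both
# versions into numeric fields and compare left to right.
lt() {
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # strictly less
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1  # equal versions are not less-than
}
```

This is why `1.15` compares below `2`: the first fields already decide it (1 < 2), so the `.15` is never reached.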
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.892 --rc genhtml_branch_coverage=1 00:08:49.892 --rc genhtml_function_coverage=1 00:08:49.892 --rc genhtml_legend=1 00:08:49.892 --rc geninfo_all_blocks=1 00:08:49.892 --rc geninfo_unexecuted_blocks=1 00:08:49.892 00:08:49.892 ' 00:08:49.892 14:34:48 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:49.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.892 --rc genhtml_branch_coverage=1 00:08:49.892 --rc genhtml_function_coverage=1 00:08:49.892 --rc genhtml_legend=1 00:08:49.892 --rc geninfo_all_blocks=1 00:08:49.892 --rc geninfo_unexecuted_blocks=1 00:08:49.892 00:08:49.892 ' 00:08:49.892 14:34:48 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:49.892 14:34:48 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:08:49.892 14:34:48 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:49.892 14:34:48 thread -- common/autotest_common.sh@10 -- # set +x 00:08:49.892 ************************************ 00:08:49.892 START TEST thread_poller_perf 00:08:49.892 ************************************ 00:08:49.892 14:34:49 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:50.151 [2024-11-04 14:34:49.042414] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:08:50.151 [2024-11-04 14:34:49.042720] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59661 ] 00:08:50.151 [2024-11-04 14:34:49.222919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.411 [2024-11-04 14:34:49.382549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.411 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:51.799 [2024-11-04T14:34:50.922Z] ====================================== 00:08:51.799 [2024-11-04T14:34:50.922Z] busy:2215679658 (cyc) 00:08:51.799 [2024-11-04T14:34:50.922Z] total_run_count: 297000 00:08:51.799 [2024-11-04T14:34:50.922Z] tsc_hz: 2200000000 (cyc) 00:08:51.799 [2024-11-04T14:34:50.922Z] ====================================== 00:08:51.799 [2024-11-04T14:34:50.922Z] poller_cost: 7460 (cyc), 3390 (nsec) 00:08:51.799 00:08:51.799 real 0m1.616s 00:08:51.799 user 0m1.410s 00:08:51.799 sys 0m0.097s 00:08:51.799 14:34:50 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:51.799 14:34:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:51.799 ************************************ 00:08:51.799 END TEST thread_poller_perf 00:08:51.799 ************************************ 00:08:51.799 14:34:50 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:51.799 14:34:50 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:08:51.799 14:34:50 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:51.799 14:34:50 thread -- common/autotest_common.sh@10 -- # set +x 00:08:51.799 ************************************ 00:08:51.799 START TEST thread_poller_perf 00:08:51.799 
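The `poller_cost: 7460 (cyc), 3390 (nsec)` line in the results above follows directly from the other counters: cycles per poller invocation is `busy / total_run_count`, and the nanosecond figure converts that through the TSC frequency. A quick arithmetic check using the numbers from this run:

```shell
# Reproduce the poller_cost arithmetic from the first poller_perf run above.
busy=2215679658
total_run_count=297000
tsc_hz=2200000000
cyc=$(( busy / total_run_count ))          # cycles per poller call
nsec=$(( cyc * 1000000000 / tsc_hz ))      # same cost in nanoseconds
echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"
```

The same formula explains the second run's `648 (cyc), 294 (nsec)` from `busy:2204616317` and `total_run_count: 3398000`; the period-1 run is far costlier per call because each poller actually sleeps between invocations.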
************************************ 00:08:51.799 14:34:50 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:51.799 [2024-11-04 14:34:50.707615] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:08:51.799 [2024-11-04 14:34:50.708073] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59698 ] 00:08:51.799 [2024-11-04 14:34:50.897453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.058 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:52.058 [2024-11-04 14:34:51.060355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.433 [2024-11-04T14:34:52.556Z] ====================================== 00:08:53.433 [2024-11-04T14:34:52.556Z] busy:2204616317 (cyc) 00:08:53.433 [2024-11-04T14:34:52.556Z] total_run_count: 3398000 00:08:53.433 [2024-11-04T14:34:52.556Z] tsc_hz: 2200000000 (cyc) 00:08:53.433 [2024-11-04T14:34:52.556Z] ====================================== 00:08:53.433 [2024-11-04T14:34:52.556Z] poller_cost: 648 (cyc), 294 (nsec) 00:08:53.433 ************************************ 00:08:53.434 END TEST thread_poller_perf 00:08:53.434 ************************************ 00:08:53.434 00:08:53.434 real 0m1.649s 00:08:53.434 user 0m1.433s 00:08:53.434 sys 0m0.105s 00:08:53.434 14:34:52 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:53.434 14:34:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:53.434 14:34:52 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:53.434 00:08:53.434 real 0m3.548s 00:08:53.434 user 0m2.994s 00:08:53.434 sys 0m0.334s 00:08:53.434 ************************************ 
00:08:53.434 END TEST thread 00:08:53.434 ************************************ 00:08:53.434 14:34:52 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:53.434 14:34:52 thread -- common/autotest_common.sh@10 -- # set +x 00:08:53.434 14:34:52 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:53.434 14:34:52 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:53.434 14:34:52 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:53.434 14:34:52 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:53.434 14:34:52 -- common/autotest_common.sh@10 -- # set +x 00:08:53.434 ************************************ 00:08:53.434 START TEST app_cmdline 00:08:53.434 ************************************ 00:08:53.434 14:34:52 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:53.434 * Looking for test storage... 00:08:53.434 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:53.434 14:34:52 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:53.434 14:34:52 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:08:53.434 14:34:52 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:53.693 14:34:52 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:53.693 14:34:52 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:53.693 14:34:52 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:53.693 14:34:52 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:53.693 14:34:52 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:53.693 14:34:52 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:53.693 14:34:52 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:53.693 14:34:52 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:53.693 14:34:52 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:08:53.693 14:34:52 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:53.693 14:34:52 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:53.693 14:34:52 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:53.693 14:34:52 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:53.693 14:34:52 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:53.693 14:34:52 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:53.693 14:34:52 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:53.693 14:34:52 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:53.693 14:34:52 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:53.693 14:34:52 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:53.693 14:34:52 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:53.693 14:34:52 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:53.693 14:34:52 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:53.693 14:34:52 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:53.693 14:34:52 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:53.693 14:34:52 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:53.693 14:34:52 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:53.693 14:34:52 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:53.693 14:34:52 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:53.693 14:34:52 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:53.693 14:34:52 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:53.693 14:34:52 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:53.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.693 --rc genhtml_branch_coverage=1 00:08:53.693 --rc genhtml_function_coverage=1 00:08:53.693 --rc 
genhtml_legend=1 00:08:53.693 --rc geninfo_all_blocks=1 00:08:53.693 --rc geninfo_unexecuted_blocks=1 00:08:53.693 00:08:53.693 ' 00:08:53.693 14:34:52 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:53.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.693 --rc genhtml_branch_coverage=1 00:08:53.693 --rc genhtml_function_coverage=1 00:08:53.693 --rc genhtml_legend=1 00:08:53.693 --rc geninfo_all_blocks=1 00:08:53.693 --rc geninfo_unexecuted_blocks=1 00:08:53.693 00:08:53.693 ' 00:08:53.693 14:34:52 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:53.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.693 --rc genhtml_branch_coverage=1 00:08:53.693 --rc genhtml_function_coverage=1 00:08:53.693 --rc genhtml_legend=1 00:08:53.693 --rc geninfo_all_blocks=1 00:08:53.693 --rc geninfo_unexecuted_blocks=1 00:08:53.693 00:08:53.693 ' 00:08:53.693 14:34:52 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:53.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.693 --rc genhtml_branch_coverage=1 00:08:53.693 --rc genhtml_function_coverage=1 00:08:53.693 --rc genhtml_legend=1 00:08:53.693 --rc geninfo_all_blocks=1 00:08:53.693 --rc geninfo_unexecuted_blocks=1 00:08:53.693 00:08:53.693 ' 00:08:53.693 14:34:52 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:53.693 14:34:52 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59781 00:08:53.693 14:34:52 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:53.693 14:34:52 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59781 00:08:53.693 14:34:52 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 59781 ']' 00:08:53.693 14:34:52 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.693 14:34:52 app_cmdline -- common/autotest_common.sh@838 -- # 
local max_retries=100 00:08:53.693 14:34:52 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.693 14:34:52 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:53.693 14:34:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:53.693 [2024-11-04 14:34:52.785063] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:08:53.693 [2024-11-04 14:34:52.785837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59781 ] 00:08:53.951 [2024-11-04 14:34:53.010215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.210 [2024-11-04 14:34:53.205458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.145 14:34:54 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:55.145 14:34:54 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:08:55.145 14:34:54 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:55.404 { 00:08:55.404 "version": "SPDK v25.01-pre git sha1 78b0a6b78", 00:08:55.404 "fields": { 00:08:55.404 "major": 25, 00:08:55.404 "minor": 1, 00:08:55.404 "patch": 0, 00:08:55.404 "suffix": "-pre", 00:08:55.404 "commit": "78b0a6b78" 00:08:55.404 } 00:08:55.404 } 00:08:55.404 14:34:54 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:55.404 14:34:54 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:55.404 14:34:54 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:55.404 14:34:54 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:55.404 14:34:54 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:55.404 14:34:54 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:55.404 14:34:54 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:55.404 14:34:54 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.404 14:34:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:55.404 14:34:54 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.663 14:34:54 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:55.663 14:34:54 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:55.663 14:34:54 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:55.663 14:34:54 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:08:55.663 14:34:54 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:55.663 14:34:54 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:55.663 14:34:54 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.663 14:34:54 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:55.663 14:34:54 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.663 14:34:54 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:55.663 14:34:54 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.663 14:34:54 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:55.663 14:34:54 app_cmdline -- common/autotest_common.sh@644 
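The `spdk_get_version` response above carries the version both as a display string and as structured fields. A sketch showing the two are consistent, using the exact response body from the log (the zero-padding of the minor field is an assumption inferred from this single sample):

```python
import json

# Response body captured from `rpc.py spdk_get_version` in the log above.
raw = '''{
  "version": "SPDK v25.01-pre git sha1 78b0a6b78",
  "fields": {"major": 25, "minor": 1, "patch": 0,
             "suffix": "-pre", "commit": "78b0a6b78"}
}'''

ver = json.loads(raw)
f = ver["fields"]
# Rebuild the human-readable string from its structured fields.
rebuilt = f'SPDK v{f["major"]}.{f["minor"]:02d}{f["suffix"]} git sha1 {f["commit"]}'
print(rebuilt == ver["version"])  # True
```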
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:55.663 14:34:54 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:55.921 request: 00:08:55.921 { 00:08:55.921 "method": "env_dpdk_get_mem_stats", 00:08:55.921 "req_id": 1 00:08:55.921 } 00:08:55.921 Got JSON-RPC error response 00:08:55.921 response: 00:08:55.921 { 00:08:55.921 "code": -32601, 00:08:55.921 "message": "Method not found" 00:08:55.921 } 00:08:55.921 14:34:54 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:08:55.921 14:34:54 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:55.921 14:34:54 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:55.921 14:34:54 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:55.921 14:34:54 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59781 00:08:55.921 14:34:54 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 59781 ']' 00:08:55.921 14:34:54 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 59781 00:08:55.921 14:34:54 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:08:55.921 14:34:54 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:55.921 14:34:54 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59781 00:08:55.921 killing process with pid 59781 00:08:55.921 14:34:54 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:55.921 14:34:54 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:55.921 14:34:54 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59781' 00:08:55.921 14:34:54 app_cmdline -- common/autotest_common.sh@971 -- # kill 59781 00:08:55.921 14:34:54 app_cmdline -- common/autotest_common.sh@976 -- # wait 59781 00:08:58.452 ************************************ 00:08:58.452 END TEST app_cmdline 00:08:58.452 ************************************ 
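The `env_dpdk_get_mem_stats` call above fails with code -32601 because `spdk_tgt` was launched with `--rpcs-allowed spdk_get_version,rpc_get_methods`, so every other method is rejected. A sketch of how a caller might detect that case; the error code and message are taken from the response logged above, while the full JSON-RPC envelope shape is my assumption:

```python
# -32601 is the JSON-RPC "Method not found" code seen in the log above.
METHOD_NOT_FOUND = -32601

def is_method_not_found(response: dict) -> bool:
    """Return True when a JSON-RPC response reports an unknown/disallowed method."""
    err = response.get("error") or {}
    return err.get("code") == METHOD_NOT_FOUND

resp = {"jsonrpc": "2.0", "id": 1,
        "error": {"code": -32601, "message": "Method not found"}}
print(is_method_not_found(resp))  # True
```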
00:08:58.452 00:08:58.452 real 0m4.674s 00:08:58.452 user 0m5.226s 00:08:58.452 sys 0m0.720s 00:08:58.453 14:34:57 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:58.453 14:34:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:58.453 14:34:57 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:58.453 14:34:57 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:58.453 14:34:57 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:58.453 14:34:57 -- common/autotest_common.sh@10 -- # set +x 00:08:58.453 ************************************ 00:08:58.453 START TEST version 00:08:58.453 ************************************ 00:08:58.453 14:34:57 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:58.453 * Looking for test storage... 00:08:58.453 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:58.453 14:34:57 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:58.453 14:34:57 version -- common/autotest_common.sh@1691 -- # lcov --version 00:08:58.453 14:34:57 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:58.453 14:34:57 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:58.453 14:34:57 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.453 14:34:57 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.453 14:34:57 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.453 14:34:57 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.453 14:34:57 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.453 14:34:57 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:58.453 14:34:57 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:58.453 14:34:57 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.453 14:34:57 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.453 14:34:57 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:08:58.453 14:34:57 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:58.453 14:34:57 version -- scripts/common.sh@344 -- # case "$op" in 00:08:58.453 14:34:57 version -- scripts/common.sh@345 -- # : 1 00:08:58.453 14:34:57 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.453 14:34:57 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:58.453 14:34:57 version -- scripts/common.sh@365 -- # decimal 1 00:08:58.453 14:34:57 version -- scripts/common.sh@353 -- # local d=1 00:08:58.453 14:34:57 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.453 14:34:57 version -- scripts/common.sh@355 -- # echo 1 00:08:58.453 14:34:57 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.453 14:34:57 version -- scripts/common.sh@366 -- # decimal 2 00:08:58.453 14:34:57 version -- scripts/common.sh@353 -- # local d=2 00:08:58.453 14:34:57 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.453 14:34:57 version -- scripts/common.sh@355 -- # echo 2 00:08:58.453 14:34:57 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.453 14:34:57 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.453 14:34:57 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.453 14:34:57 version -- scripts/common.sh@368 -- # return 0 00:08:58.453 14:34:57 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.453 14:34:57 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:58.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.453 --rc genhtml_branch_coverage=1 00:08:58.453 --rc genhtml_function_coverage=1 00:08:58.453 --rc genhtml_legend=1 00:08:58.453 --rc geninfo_all_blocks=1 00:08:58.453 --rc geninfo_unexecuted_blocks=1 00:08:58.453 00:08:58.453 ' 00:08:58.453 14:34:57 version -- common/autotest_common.sh@1704 -- # 
LCOV_OPTS=' 00:08:58.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.453 --rc genhtml_branch_coverage=1 00:08:58.453 --rc genhtml_function_coverage=1 00:08:58.453 --rc genhtml_legend=1 00:08:58.453 --rc geninfo_all_blocks=1 00:08:58.453 --rc geninfo_unexecuted_blocks=1 00:08:58.453 00:08:58.453 ' 00:08:58.453 14:34:57 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:58.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.453 --rc genhtml_branch_coverage=1 00:08:58.453 --rc genhtml_function_coverage=1 00:08:58.453 --rc genhtml_legend=1 00:08:58.453 --rc geninfo_all_blocks=1 00:08:58.453 --rc geninfo_unexecuted_blocks=1 00:08:58.453 00:08:58.453 ' 00:08:58.453 14:34:57 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:58.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.453 --rc genhtml_branch_coverage=1 00:08:58.453 --rc genhtml_function_coverage=1 00:08:58.453 --rc genhtml_legend=1 00:08:58.453 --rc geninfo_all_blocks=1 00:08:58.453 --rc geninfo_unexecuted_blocks=1 00:08:58.453 00:08:58.453 ' 00:08:58.453 14:34:57 version -- app/version.sh@17 -- # get_header_version major 00:08:58.453 14:34:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:58.453 14:34:57 version -- app/version.sh@14 -- # cut -f2 00:08:58.453 14:34:57 version -- app/version.sh@14 -- # tr -d '"' 00:08:58.453 14:34:57 version -- app/version.sh@17 -- # major=25 00:08:58.453 14:34:57 version -- app/version.sh@18 -- # get_header_version minor 00:08:58.453 14:34:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:58.453 14:34:57 version -- app/version.sh@14 -- # cut -f2 00:08:58.453 14:34:57 version -- app/version.sh@14 -- # tr -d '"' 00:08:58.453 14:34:57 version -- app/version.sh@18 -- # minor=1 00:08:58.453 14:34:57 
version -- app/version.sh@19 -- # get_header_version patch 00:08:58.453 14:34:57 version -- app/version.sh@14 -- # cut -f2 00:08:58.453 14:34:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:58.453 14:34:57 version -- app/version.sh@14 -- # tr -d '"' 00:08:58.453 14:34:57 version -- app/version.sh@19 -- # patch=0 00:08:58.453 14:34:57 version -- app/version.sh@20 -- # get_header_version suffix 00:08:58.453 14:34:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:58.453 14:34:57 version -- app/version.sh@14 -- # tr -d '"' 00:08:58.453 14:34:57 version -- app/version.sh@14 -- # cut -f2 00:08:58.453 14:34:57 version -- app/version.sh@20 -- # suffix=-pre 00:08:58.453 14:34:57 version -- app/version.sh@22 -- # version=25.1 00:08:58.453 14:34:57 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:58.453 14:34:57 version -- app/version.sh@28 -- # version=25.1rc0 00:08:58.453 14:34:57 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:58.453 14:34:57 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:58.453 14:34:57 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:58.453 14:34:57 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:58.453 00:08:58.453 real 0m0.254s 00:08:58.453 user 0m0.167s 00:08:58.453 sys 0m0.123s 00:08:58.453 14:34:57 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:58.453 ************************************ 00:08:58.453 END TEST version 00:08:58.453 ************************************ 00:08:58.453 14:34:57 version -- common/autotest_common.sh@10 -- # set +x 00:08:58.453 
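The `get_header_version` pipeline traced above greps a `#define` out of `include/spdk/version.h`, cuts field 2, and strips quotes, then composes `25.1rc0` from major/minor/patch/suffix. A sketch of the same flow; the header text here is a hypothetical stand-in holding the values the run extracted, and the `-pre` to `rc0` mapping follows the trace:

```python
import re

# Stand-in for include/spdk/version.h with the values extracted above.
header = '''
#define SPDK_VERSION_MAJOR 25
#define SPDK_VERSION_MINOR 1
#define SPDK_VERSION_PATCH 0
#define SPDK_VERSION_SUFFIX "-pre"
'''

def get_header_version(name: str) -> str:
    # Mirrors version.sh: grep the #define, take field 2, strip quotes.
    m = re.search(rf'^#define SPDK_VERSION_{name}\s+(\S+)', header, re.M)
    return m.group(1).strip('"')

major, minor, patch = (get_header_version(n) for n in ("MAJOR", "MINOR", "PATCH"))
version = f"{major}.{minor}"
if int(patch) != 0:          # patch is 0 here, so it is omitted
    version += f".{patch}"
if get_header_version("SUFFIX") == "-pre":
    version += "rc0"         # pre-release builds report as rcN
print(version)  # 25.1rc0
```

The final check in the trace then asserts this matches `python3 -c 'import spdk; print(spdk.__version__)'`, i.e. the C header and the Python package agree.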
14:34:57 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:58.453 14:34:57 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:08:58.453 14:34:57 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:58.453 14:34:57 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:58.453 14:34:57 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:58.453 14:34:57 -- common/autotest_common.sh@10 -- # set +x 00:08:58.453 ************************************ 00:08:58.453 START TEST bdev_raid 00:08:58.453 ************************************ 00:08:58.453 14:34:57 bdev_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:58.453 * Looking for test storage... 00:08:58.453 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:58.453 14:34:57 bdev_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:58.453 14:34:57 bdev_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:08:58.453 14:34:57 bdev_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:58.713 14:34:57 bdev_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:58.713 14:34:57 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.713 14:34:57 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.713 14:34:57 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.713 14:34:57 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.713 14:34:57 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.713 14:34:57 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:08:58.713 14:34:57 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:08:58.713 14:34:57 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.713 14:34:57 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.713 14:34:57 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:08:58.713 14:34:57 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:08:58.713 14:34:57 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:08:58.713 14:34:57 bdev_raid -- scripts/common.sh@345 -- # : 1 00:08:58.713 14:34:57 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.713 14:34:57 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:58.713 14:34:57 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:08:58.713 14:34:57 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:08:58.713 14:34:57 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.713 14:34:57 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:08:58.713 14:34:57 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.713 14:34:57 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:08:58.713 14:34:57 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:08:58.713 14:34:57 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.713 14:34:57 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:08:58.713 14:34:57 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.713 14:34:57 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.713 14:34:57 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.713 14:34:57 bdev_raid -- scripts/common.sh@368 -- # return 0 00:08:58.713 14:34:57 bdev_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.713 14:34:57 bdev_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:58.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.713 --rc genhtml_branch_coverage=1 00:08:58.713 --rc genhtml_function_coverage=1 00:08:58.713 --rc genhtml_legend=1 00:08:58.713 --rc geninfo_all_blocks=1 00:08:58.713 --rc geninfo_unexecuted_blocks=1 00:08:58.713 00:08:58.713 ' 00:08:58.713 14:34:57 bdev_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:58.713 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:58.713 --rc genhtml_branch_coverage=1 00:08:58.713 --rc genhtml_function_coverage=1 00:08:58.713 --rc genhtml_legend=1 00:08:58.713 --rc geninfo_all_blocks=1 00:08:58.713 --rc geninfo_unexecuted_blocks=1 00:08:58.713 00:08:58.713 ' 00:08:58.713 14:34:57 bdev_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:58.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.713 --rc genhtml_branch_coverage=1 00:08:58.713 --rc genhtml_function_coverage=1 00:08:58.713 --rc genhtml_legend=1 00:08:58.713 --rc geninfo_all_blocks=1 00:08:58.713 --rc geninfo_unexecuted_blocks=1 00:08:58.713 00:08:58.713 ' 00:08:58.713 14:34:57 bdev_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:58.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.713 --rc genhtml_branch_coverage=1 00:08:58.713 --rc genhtml_function_coverage=1 00:08:58.713 --rc genhtml_legend=1 00:08:58.713 --rc geninfo_all_blocks=1 00:08:58.713 --rc geninfo_unexecuted_blocks=1 00:08:58.713 00:08:58.713 ' 00:08:58.713 14:34:57 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:58.713 14:34:57 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:08:58.713 14:34:57 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:08:58.713 14:34:57 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:08:58.713 14:34:57 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:08:58.713 14:34:57 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:08:58.713 14:34:57 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:08:58.713 14:34:57 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:58.713 14:34:57 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:58.713 14:34:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:58.713 ************************************ 
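The lcov version check traced repeatedly above (`lt 1.15 2` via `cmp_versions`) splits each version string on `.`, `-`, and `:` (the `IFS=.-:` lines) and compares component-wise, treating missing components as zero. A Python sketch of that comparison; the function name is mine, not from `scripts/common.sh`:

```python
import re

def version_lt(v1: str, v2: str) -> bool:
    """Component-wise version comparison, splitting on . - : like cmp_versions."""
    a = re.split(r"[.:-]", v1)
    b = re.split(r"[.:-]", v2)
    for i in range(max(len(a), len(b))):
        x = int(a[i]) if i < len(a) else 0   # absent components compare as 0
        y = int(b[i]) if i < len(b) else 0
        if x != y:
            return x < y
    return False

print(version_lt("1.15", "2"))  # True, so the lcov-version branch is taken
```

Note the component-wise rule is why `1.15 < 2` holds here even though a naive string or float comparison of "1.15" and "2" could disagree.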
00:08:58.713 START TEST raid1_resize_data_offset_test 00:08:58.713 ************************************ 00:08:58.713 14:34:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1127 -- # raid_resize_data_offset_test 00:08:58.713 Process raid pid: 59974 00:08:58.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.713 14:34:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=59974 00:08:58.713 14:34:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:58.713 14:34:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59974' 00:08:58.713 14:34:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59974 00:08:58.713 14:34:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@833 -- # '[' -z 59974 ']' 00:08:58.713 14:34:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.713 14:34:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:58.713 14:34:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.713 14:34:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:58.713 14:34:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.713 [2024-11-04 14:34:57.734634] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:08:58.713 [2024-11-04 14:34:57.735003] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:58.972 [2024-11-04 14:34:57.925233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.231 [2024-11-04 14:34:58.096295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.231 [2024-11-04 14:34:58.299602] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:59.231 [2024-11-04 14:34:58.299778] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:59.820 14:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:59.821 14:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@866 -- # return 0 00:08:59.821 14:34:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:08:59.821 14:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.821 14:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.821 malloc0 00:08:59.821 14:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.821 14:34:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:08:59.821 14:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.821 14:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.821 malloc1 00:08:59.821 14:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.821 14:34:58 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:08:59.821 14:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.821 14:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.821 null0 00:08:59.821 14:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.821 14:34:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:08:59.821 14:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.821 14:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.821 [2024-11-04 14:34:58.860521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:08:59.821 [2024-11-04 14:34:58.862844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:59.821 [2024-11-04 14:34:58.862914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:08:59.821 [2024-11-04 14:34:58.863148] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:59.821 [2024-11-04 14:34:58.863171] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:08:59.821 [2024-11-04 14:34:58.863487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:59.821 [2024-11-04 14:34:58.863716] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:59.821 [2024-11-04 14:34:58.863738] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:59.821 [2024-11-04 14:34:58.863970] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:08:59.821 14:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.821 14:34:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.821 14:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.821 14:34:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:08:59.821 14:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.821 14:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.821 14:34:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:08:59.821 14:34:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:08:59.821 14:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.821 14:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.098 [2024-11-04 14:34:58.928524] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:09:00.098 14:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.098 14:34:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:09:00.098 14:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.098 14:34:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.356 malloc2 00:09:00.356 14:34:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.356 14:34:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:09:00.356 14:34:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.356 14:34:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.356 [2024-11-04 14:34:59.470568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:00.615 [2024-11-04 14:34:59.487426] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:00.615 14:34:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.615 [2024-11-04 14:34:59.489775] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:09:00.615 14:34:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.616 14:34:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.616 14:34:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:09:00.616 14:34:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.616 14:34:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.616 14:34:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:09:00.616 14:34:59 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59974 00:09:00.616 14:34:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@952 -- # '[' -z 59974 ']' 00:09:00.616 14:34:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # kill -0 59974 00:09:00.616 14:34:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # uname 00:09:00.616 14:34:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux 
']' 00:09:00.616 14:34:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59974 00:09:00.616 killing process with pid 59974 00:09:00.616 14:34:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:00.616 14:34:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:00.616 14:34:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59974' 00:09:00.616 14:34:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@971 -- # kill 59974 00:09:00.616 14:34:59 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@976 -- # wait 59974 00:09:00.616 [2024-11-04 14:34:59.574558] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:00.616 [2024-11-04 14:34:59.574911] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:09:00.616 [2024-11-04 14:34:59.575004] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.616 [2024-11-04 14:34:59.575031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:09:00.616 [2024-11-04 14:34:59.604621] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:00.616 [2024-11-04 14:34:59.605051] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:00.616 [2024-11-04 14:34:59.605077] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:09:02.519 [2024-11-04 14:35:01.206456] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:03.456 14:35:02 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:09:03.456 00:09:03.456 real 0m4.613s 00:09:03.456 user 0m4.596s 00:09:03.456 sys 0m0.590s 00:09:03.456 14:35:02 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:03.456 14:35:02 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.456 ************************************ 00:09:03.456 END TEST raid1_resize_data_offset_test 00:09:03.456 ************************************ 00:09:03.456 14:35:02 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:09:03.456 14:35:02 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:03.456 14:35:02 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:03.456 14:35:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:03.456 ************************************ 00:09:03.456 START TEST raid0_resize_superblock_test 00:09:03.456 ************************************ 00:09:03.456 14:35:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 0 00:09:03.456 14:35:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:09:03.456 Process raid pid: 60058 00:09:03.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:03.456 14:35:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60058 00:09:03.456 14:35:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:03.456 14:35:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60058' 00:09:03.456 14:35:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60058 00:09:03.456 14:35:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 60058 ']' 00:09:03.456 14:35:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.456 14:35:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:03.456 14:35:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.456 14:35:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:03.456 14:35:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.456 [2024-11-04 14:35:02.393132] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:09:03.456 [2024-11-04 14:35:02.393566] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.715 [2024-11-04 14:35:02.578850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.715 [2024-11-04 14:35:02.709210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.974 [2024-11-04 14:35:02.917621] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.974 [2024-11-04 14:35:02.917830] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.541 14:35:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:04.541 14:35:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:09:04.541 14:35:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:09:04.541 14:35:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.541 14:35:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.799 malloc0 00:09:04.799 14:35:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.799 14:35:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:09:04.799 14:35:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.799 14:35:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.058 [2024-11-04 14:35:03.921407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:09:05.058 [2024-11-04 14:35:03.921694] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.058 [2024-11-04 14:35:03.921752] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:05.058 [2024-11-04 14:35:03.921774] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.058 [2024-11-04 14:35:03.924859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.058 [2024-11-04 14:35:03.925074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:09:05.058 pt0 00:09:05.058 14:35:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.058 14:35:03 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:09:05.058 14:35:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.058 14:35:03 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.058 6d057627-70ef-4703-9954-fa8500623092 00:09:05.058 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.058 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:09:05.058 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.058 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.058 8a9cb883-0d14-43a6-98b8-ef38fa73edc6 00:09:05.058 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.058 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:09:05.058 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.058 14:35:04 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.058 87097cad-4662-43c5-a55b-9947ec0ce4ed 00:09:05.059 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.059 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:09:05.059 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:09:05.059 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.059 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.059 [2024-11-04 14:35:04.067646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8a9cb883-0d14-43a6-98b8-ef38fa73edc6 is claimed 00:09:05.059 [2024-11-04 14:35:04.067765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 87097cad-4662-43c5-a55b-9947ec0ce4ed is claimed 00:09:05.059 [2024-11-04 14:35:04.067984] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:05.059 [2024-11-04 14:35:04.068018] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:09:05.059 [2024-11-04 14:35:04.068346] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:05.059 [2024-11-04 14:35:04.068619] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:05.059 [2024-11-04 14:35:04.068637] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:05.059 [2024-11-04 14:35:04.068830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.059 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.059 14:35:04 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:09:05.059 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:09:05.059 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.059 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.059 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.059 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:09:05.059 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:09:05.059 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:09:05.059 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.059 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.059 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.318 [2024-11-04 
14:35:04.187980] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.318 [2024-11-04 14:35:04.235901] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:05.318 [2024-11-04 14:35:04.235934] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '8a9cb883-0d14-43a6-98b8-ef38fa73edc6' was resized: old size 131072, new size 204800 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.318 [2024-11-04 14:35:04.243769] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:05.318 [2024-11-04 14:35:04.243795] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '87097cad-4662-43c5-a55b-9947ec0ce4ed' was resized: old size 131072, new size 204800 00:09:05.318 
[2024-11-04 14:35:04.243850] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.318 14:35:04 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.318 [2024-11-04 14:35:04.359973] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:05.318 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:05.319 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:09:05.319 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:09:05.319 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.319 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.319 [2024-11-04 14:35:04.411700] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:09:05.319 [2024-11-04 14:35:04.411791] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:09:05.319 [2024-11-04 14:35:04.411811] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:05.319 [2024-11-04 14:35:04.411834] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:09:05.319 [2024-11-04 14:35:04.411987] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:05.319 [2024-11-04 14:35:04.412048] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:05.319 
[2024-11-04 14:35:04.412067] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:09:05.319 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.319 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:09:05.319 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.319 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.319 [2024-11-04 14:35:04.419635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:09:05.319 [2024-11-04 14:35:04.419830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.319 [2024-11-04 14:35:04.419868] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:09:05.319 [2024-11-04 14:35:04.419887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.319 [2024-11-04 14:35:04.422896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.319 [2024-11-04 14:35:04.423093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:09:05.319 pt0 00:09:05.319 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.319 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:09:05.319 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.319 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.319 [2024-11-04 14:35:04.425353] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 8a9cb883-0d14-43a6-98b8-ef38fa73edc6 00:09:05.319 [2024-11-04 14:35:04.425435] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8a9cb883-0d14-43a6-98b8-ef38fa73edc6 is claimed 00:09:05.319 [2024-11-04 14:35:04.425578] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 87097cad-4662-43c5-a55b-9947ec0ce4ed 00:09:05.319 [2024-11-04 14:35:04.425613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 87097cad-4662-43c5-a55b-9947ec0ce4ed is claimed 00:09:05.319 [2024-11-04 14:35:04.425769] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 87097cad-4662-43c5-a55b-9947ec0ce4ed (2) smaller than existing raid bdev Raid (3) 00:09:05.319 [2024-11-04 14:35:04.425811] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 8a9cb883-0d14-43a6-98b8-ef38fa73edc6: File exists 00:09:05.319 [2024-11-04 14:35:04.425868] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:05.319 [2024-11-04 14:35:04.425887] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:09:05.319 [2024-11-04 14:35:04.426231] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:05.319 [2024-11-04 14:35:04.426558] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:05.319 [2024-11-04 14:35:04.426580] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:09:05.319 [2024-11-04 14:35:04.426773] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.319 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.319 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:05.319 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:05.319 14:35:04 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.319 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.319 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:05.319 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:09:05.577 [2024-11-04 14:35:04.440000] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:05.577 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.577 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:05.577 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:05.577 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:09:05.577 14:35:04 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60058 00:09:05.577 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 60058 ']' 00:09:05.577 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 60058 00:09:05.577 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@957 -- # uname 00:09:05.577 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:05.577 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60058 00:09:05.577 killing process with pid 60058 00:09:05.577 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:05.577 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:05.577 14:35:04 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 60058' 00:09:05.577 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 60058 00:09:05.577 [2024-11-04 14:35:04.522084] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:05.577 [2024-11-04 14:35:04.522178] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:05.577 14:35:04 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 60058 00:09:05.577 [2024-11-04 14:35:04.522241] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:05.577 [2024-11-04 14:35:04.522256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:09:06.951 [2024-11-04 14:35:05.825737] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:07.885 ************************************ 00:09:07.885 END TEST raid0_resize_superblock_test 00:09:07.885 ************************************ 00:09:07.885 14:35:06 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:09:07.885 00:09:07.885 real 0m4.564s 00:09:07.885 user 0m4.902s 00:09:07.885 sys 0m0.614s 00:09:07.885 14:35:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:07.885 14:35:06 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.885 14:35:06 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:09:07.885 14:35:06 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:07.885 14:35:06 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:07.885 14:35:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:07.885 ************************************ 00:09:07.885 START TEST raid1_resize_superblock_test 00:09:07.885 
************************************ 00:09:07.885 14:35:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 1 00:09:07.885 14:35:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:09:07.885 Process raid pid: 60156 00:09:07.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.885 14:35:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60156 00:09:07.885 14:35:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60156' 00:09:07.885 14:35:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:07.885 14:35:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60156 00:09:07.885 14:35:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 60156 ']' 00:09:07.885 14:35:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.885 14:35:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:07.885 14:35:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.885 14:35:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:07.885 14:35:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.885 [2024-11-04 14:35:07.004279] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:09:07.885 [2024-11-04 14:35:07.004450] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.143 [2024-11-04 14:35:07.194524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.401 [2024-11-04 14:35:07.332081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.659 [2024-11-04 14:35:07.548547] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.659 [2024-11-04 14:35:07.548590] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.917 14:35:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:08.917 14:35:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:09:08.917 14:35:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:09:08.917 14:35:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.917 14:35:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.484 malloc0 00:09:09.484 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.484 14:35:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:09:09.484 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.484 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.484 [2024-11-04 14:35:08.565176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:09:09.484 [2024-11-04 14:35:08.565396] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.484 [2024-11-04 14:35:08.565436] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:09.484 [2024-11-04 14:35:08.565457] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:09.484 [2024-11-04 14:35:08.568218] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:09.484 [2024-11-04 14:35:08.568268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:09:09.484 pt0 00:09:09.484 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.484 14:35:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:09:09.484 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.484 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.743 f6587856-122f-4560-b1a6-8cbc16ab26ac 00:09:09.743 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.743 14:35:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:09:09.743 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.743 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.743 ab3839bc-2ceb-44b1-9721-d625b2aaae99 00:09:09.743 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.743 14:35:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:09:09.743 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.743 14:35:08 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.743 5aecbc81-fea7-41e8-81c2-1b4eb015ecf8 00:09:09.743 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.743 14:35:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:09:09.743 14:35:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:09:09.743 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.743 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.743 [2024-11-04 14:35:08.709259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ab3839bc-2ceb-44b1-9721-d625b2aaae99 is claimed 00:09:09.743 [2024-11-04 14:35:08.709371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 5aecbc81-fea7-41e8-81c2-1b4eb015ecf8 is claimed 00:09:09.743 [2024-11-04 14:35:08.709559] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:09.743 [2024-11-04 14:35:08.709585] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:09:09.743 [2024-11-04 14:35:08.709920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:09.743 [2024-11-04 14:35:08.710208] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:09.743 [2024-11-04 14:35:08.710232] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:09.743 [2024-11-04 14:35:08.710424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:09.743 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.743 14:35:08 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:09:09.743 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.743 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.743 14:35:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:09:09.743 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.743 14:35:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:09:09.743 14:35:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:09:09.743 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.743 14:35:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:09:09.743 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.743 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.743 14:35:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:09:09.743 14:35:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:09.743 14:35:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:09:09.743 14:35:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:09.743 14:35:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:09.743 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.743 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.744 [2024-11-04 
14:35:08.841549] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:09.744 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.003 14:35:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:10.003 14:35:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:09:10.003 14:35:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:09:10.003 14:35:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:09:10.003 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.003 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.003 [2024-11-04 14:35:08.885526] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:10.003 [2024-11-04 14:35:08.885558] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'ab3839bc-2ceb-44b1-9721-d625b2aaae99' was resized: old size 131072, new size 204800 00:09:10.003 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.003 14:35:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:09:10.003 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.003 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.003 [2024-11-04 14:35:08.893431] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:10.003 [2024-11-04 14:35:08.893459] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '5aecbc81-fea7-41e8-81c2-1b4eb015ecf8' was resized: old size 131072, new size 204800 00:09:10.003 
[2024-11-04 14:35:08.893498] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:09:10.003 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.003 14:35:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:09:10.003 14:35:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:09:10.003 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.003 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.003 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.003 14:35:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:09:10.003 14:35:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:09:10.003 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.003 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.003 14:35:08 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:09:10.003 14:35:08 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.003 14:35:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:09:10.003 14:35:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:10.003 14:35:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:10.003 14:35:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:10.003 14:35:09 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:09:10.003 14:35:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.003 14:35:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.003 [2024-11-04 14:35:09.017588] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:10.003 14:35:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.003 14:35:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:10.003 14:35:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:09:10.003 14:35:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:09:10.003 14:35:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:09:10.003 14:35:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.003 14:35:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.003 [2024-11-04 14:35:09.069336] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:09:10.003 [2024-11-04 14:35:09.069431] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:09:10.003 [2024-11-04 14:35:09.069469] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:09:10.003 [2024-11-04 14:35:09.069651] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:10.003 [2024-11-04 14:35:09.069897] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:10.003 [2024-11-04 14:35:09.070021] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:10.003 
[2024-11-04 14:35:09.070045] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:09:10.003 14:35:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.003 14:35:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:09:10.003 14:35:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.003 14:35:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.003 [2024-11-04 14:35:09.077271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:09:10.003 [2024-11-04 14:35:09.077338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.003 [2024-11-04 14:35:09.077367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:09:10.003 [2024-11-04 14:35:09.077386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.003 [2024-11-04 14:35:09.080249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.003 [2024-11-04 14:35:09.080314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:09:10.003 pt0 00:09:10.003 14:35:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.004 14:35:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:09:10.004 [2024-11-04 14:35:09.082556] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev ab3839bc-2ceb-44b1-9721-d625b2aaae99 00:09:10.004 [2024-11-04 14:35:09.082635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ab3839bc-2ceb-44b1-9721-d625b2aaae99 is claimed 00:09:10.004 14:35:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:10.004 [2024-11-04 14:35:09.082776] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 5aecbc81-fea7-41e8-81c2-1b4eb015ecf8 00:09:10.004 [2024-11-04 14:35:09.082813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 5aecbc81-fea7-41e8-81c2-1b4eb015ecf8 is claimed 00:09:10.004 14:35:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.004 [2024-11-04 14:35:09.082985] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 5aecbc81-fea7-41e8-81c2-1b4eb015ecf8 (2) smaller than existing raid bdev Raid (3) 00:09:10.004 [2024-11-04 14:35:09.083017] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev ab3839bc-2ceb-44b1-9721-d625b2aaae99: File exists 00:09:10.004 [2024-11-04 14:35:09.083075] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:10.004 [2024-11-04 14:35:09.083094] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:10.004 [2024-11-04 14:35:09.083398] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:10.004 [2024-11-04 14:35:09.083606] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:10.004 [2024-11-04 14:35:09.083621] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:09:10.004 [2024-11-04 14:35:09.083801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.004 14:35:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.004 14:35:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:10.004 14:35:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:10.004 14:35:09 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.004 14:35:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.004 14:35:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:10.004 14:35:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:09:10.004 [2024-11-04 14:35:09.097586] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:10.004 14:35:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.262 14:35:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:10.262 14:35:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:09:10.262 14:35:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:09:10.262 14:35:09 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60156 00:09:10.262 14:35:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 60156 ']' 00:09:10.262 14:35:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 60156 00:09:10.262 14:35:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # uname 00:09:10.262 14:35:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:10.262 14:35:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60156 00:09:10.262 killing process with pid 60156 00:09:10.262 14:35:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:10.262 14:35:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:10.262 14:35:09 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 60156' 00:09:10.262 14:35:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 60156 00:09:10.262 [2024-11-04 14:35:09.180043] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:10.262 [2024-11-04 14:35:09.180110] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:10.262 14:35:09 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 60156 00:09:10.262 [2024-11-04 14:35:09.180169] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:10.262 [2024-11-04 14:35:09.180183] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:09:11.636 [2024-11-04 14:35:10.491302] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:12.573 14:35:11 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:09:12.573 00:09:12.573 real 0m4.644s 00:09:12.573 user 0m4.964s 00:09:12.573 sys 0m0.629s 00:09:12.573 ************************************ 00:09:12.573 END TEST raid1_resize_superblock_test 00:09:12.573 ************************************ 00:09:12.573 14:35:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:12.573 14:35:11 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.573 14:35:11 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:09:12.573 14:35:11 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:09:12.573 14:35:11 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:09:12.573 14:35:11 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:09:12.573 14:35:11 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:09:12.573 14:35:11 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:09:12.573 
14:35:11 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:12.573 14:35:11 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:12.573 14:35:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:12.573 ************************************ 00:09:12.573 START TEST raid_function_test_raid0 00:09:12.573 ************************************ 00:09:12.573 14:35:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1127 -- # raid_function_test raid0 00:09:12.573 14:35:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:09:12.573 14:35:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:09:12.573 14:35:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:09:12.573 Process raid pid: 60259 00:09:12.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.573 14:35:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60259 00:09:12.573 14:35:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60259' 00:09:12.573 14:35:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:12.573 14:35:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60259 00:09:12.573 14:35:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@833 -- # '[' -z 60259 ']' 00:09:12.573 14:35:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.573 14:35:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:12.573 14:35:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:12.573 14:35:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:12.573 14:35:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:12.831 [2024-11-04 14:35:11.711025] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:09:12.831 [2024-11-04 14:35:11.711440] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.831 [2024-11-04 14:35:11.897321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.100 [2024-11-04 14:35:12.024444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.358 [2024-11-04 14:35:12.230252] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.359 [2024-11-04 14:35:12.230297] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.617 14:35:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:13.617 14:35:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@866 -- # return 0 00:09:13.617 14:35:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:09:13.617 14:35:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.617 14:35:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:13.877 Base_1 00:09:13.877 14:35:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.877 14:35:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:09:13.877 14:35:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.877 
14:35:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:13.877 Base_2 00:09:13.877 14:35:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.877 14:35:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:09:13.877 14:35:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.877 14:35:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:13.877 [2024-11-04 14:35:12.824920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:13.877 [2024-11-04 14:35:12.827350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:13.877 [2024-11-04 14:35:12.827458] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:13.877 [2024-11-04 14:35:12.827479] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:13.877 [2024-11-04 14:35:12.827801] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:13.877 [2024-11-04 14:35:12.828040] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:13.877 [2024-11-04 14:35:12.828058] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:09:13.877 [2024-11-04 14:35:12.828237] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:13.877 14:35:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.877 14:35:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:13.877 14:35:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:09:13.877 14:35:12 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.877 14:35:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:13.877 14:35:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.877 14:35:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:09:13.877 14:35:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:09:13.877 14:35:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:09:13.877 14:35:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:09:13.877 14:35:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:09:13.877 14:35:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:13.877 14:35:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:13.877 14:35:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:13.877 14:35:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:09:13.877 14:35:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:13.877 14:35:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:13.877 14:35:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:09:14.136 [2024-11-04 14:35:13.117054] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:14.136 /dev/nbd0 00:09:14.136 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:14.136 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:09:14.136 14:35:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:09:14.136 14:35:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # local i 00:09:14.136 14:35:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:14.136 14:35:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:14.136 14:35:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:09:14.136 14:35:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # break 00:09:14.136 14:35:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:14.136 14:35:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:14.136 14:35:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:14.136 1+0 records in 00:09:14.136 1+0 records out 00:09:14.136 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301184 s, 13.6 MB/s 00:09:14.136 14:35:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.136 14:35:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # size=4096 00:09:14.136 14:35:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.136 14:35:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:14.136 14:35:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # return 0 00:09:14.136 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:14.136 14:35:13 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:14.136 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:09:14.136 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:09:14.136 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:09:14.395 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:14.395 { 00:09:14.395 "nbd_device": "/dev/nbd0", 00:09:14.395 "bdev_name": "raid" 00:09:14.395 } 00:09:14.395 ]' 00:09:14.395 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:14.395 { 00:09:14.395 "nbd_device": "/dev/nbd0", 00:09:14.395 "bdev_name": "raid" 00:09:14.395 } 00:09:14.395 ]' 00:09:14.395 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:14.655 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:09:14.655 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:14.655 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:09:14.655 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:09:14.655 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:09:14.655 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:09:14.655 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:09:14.655 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:09:14.655 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:09:14.655 14:35:13 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:09:14.655 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:09:14.655 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:09:14.655 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:09:14.655 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:09:14.655 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:09:14.655 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:09:14.655 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:09:14.655 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:09:14.655 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:09:14.655 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:09:14.655 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:09:14.655 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:09:14.655 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:09:14.655 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:09:14.655 4096+0 records in 00:09:14.655 4096+0 records out 00:09:14.655 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0304036 s, 69.0 MB/s 00:09:14.655 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:09:14.915 4096+0 records in 00:09:14.915 4096+0 records out 00:09:14.915 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.320187 s, 6.5 MB/s 00:09:14.915 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:09:14.915 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:14.915 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:09:14.915 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:14.915 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:09:14.915 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:09:14.915 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:09:14.915 128+0 records in 00:09:14.915 128+0 records out 00:09:14.915 65536 bytes (66 kB, 64 KiB) copied, 0.00100428 s, 65.3 MB/s 00:09:14.915 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:09:14.915 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:14.915 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:14.915 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:14.915 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:14.915 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:09:14.915 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:09:14.915 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:09:14.915 2035+0 records in 00:09:14.915 2035+0 records out 00:09:14.915 1041920 
bytes (1.0 MB, 1018 KiB) copied, 0.0117865 s, 88.4 MB/s 00:09:14.915 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:09:14.915 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:14.915 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:14.915 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:14.915 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:14.915 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:09:14.915 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:09:14.915 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:09:14.915 456+0 records in 00:09:14.915 456+0 records out 00:09:14.915 233472 bytes (233 kB, 228 KiB) copied, 0.00354557 s, 65.8 MB/s 00:09:14.915 14:35:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:09:14.915 14:35:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:14.915 14:35:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:14.915 14:35:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:14.915 14:35:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:14.915 14:35:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:09:14.915 14:35:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:09:14.915 14:35:14 bdev_raid.raid_function_test_raid0 
-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:09:14.915 14:35:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:14.915 14:35:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:14.915 14:35:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:09:14.915 14:35:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:14.915 14:35:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:09:15.174 14:35:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:15.174 [2024-11-04 14:35:14.294575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.432 14:35:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:15.432 14:35:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:15.432 14:35:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:15.432 14:35:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:15.432 14:35:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:15.432 14:35:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:09:15.432 14:35:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:09:15.432 14:35:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:09:15.432 14:35:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:09:15.432 14:35:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_get_disks 00:09:15.691 14:35:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:15.691 14:35:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:15.691 14:35:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:15.691 14:35:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:15.691 14:35:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:15.691 14:35:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:15.691 14:35:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:09:15.691 14:35:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:09:15.691 14:35:14 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:15.691 14:35:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:09:15.691 14:35:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:09:15.691 14:35:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60259 00:09:15.691 14:35:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # '[' -z 60259 ']' 00:09:15.691 14:35:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # kill -0 60259 00:09:15.691 14:35:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # uname 00:09:15.691 14:35:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:15.691 14:35:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60259 00:09:15.691 killing process with pid 60259 00:09:15.691 14:35:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:15.691 14:35:14 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:15.691 14:35:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60259' 00:09:15.691 14:35:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@971 -- # kill 60259 00:09:15.691 [2024-11-04 14:35:14.716449] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:15.691 14:35:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@976 -- # wait 60259 00:09:15.691 [2024-11-04 14:35:14.716568] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:15.691 [2024-11-04 14:35:14.716629] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:15.691 [2024-11-04 14:35:14.716661] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:09:15.950 [2024-11-04 14:35:14.902136] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:16.885 ************************************ 00:09:16.885 END TEST raid_function_test_raid0 00:09:16.885 ************************************ 00:09:16.885 14:35:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:09:16.885 00:09:16.885 real 0m4.314s 00:09:16.885 user 0m5.322s 00:09:16.885 sys 0m1.021s 00:09:16.885 14:35:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:16.885 14:35:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:09:16.885 14:35:15 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:09:16.885 14:35:15 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:16.885 14:35:15 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:16.885 14:35:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:16.885 
************************************ 00:09:16.885 START TEST raid_function_test_concat 00:09:16.885 ************************************ 00:09:16.885 14:35:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1127 -- # raid_function_test concat 00:09:16.885 14:35:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:09:16.885 14:35:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:09:16.885 14:35:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:09:16.885 14:35:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60388 00:09:16.885 14:35:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:16.885 Process raid pid: 60388 00:09:16.885 14:35:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60388' 00:09:16.885 14:35:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60388 00:09:16.885 14:35:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@833 -- # '[' -z 60388 ']' 00:09:16.885 14:35:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.885 14:35:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:16.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.885 14:35:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:16.885 14:35:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:16.885 14:35:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:17.144 [2024-11-04 14:35:16.073390] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:09:17.144 [2024-11-04 14:35:16.073562] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.144 [2024-11-04 14:35:16.260477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.402 [2024-11-04 14:35:16.399377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.661 [2024-11-04 14:35:16.608638] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.661 [2024-11-04 14:35:16.608695] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:18.229 14:35:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:18.229 14:35:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@866 -- # return 0 00:09:18.229 14:35:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:09:18.229 14:35:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.229 14:35:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:18.229 Base_1 00:09:18.229 14:35:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.229 14:35:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:09:18.229 14:35:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:18.229 14:35:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:18.229 Base_2 00:09:18.229 14:35:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.230 14:35:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:09:18.230 14:35:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.230 14:35:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:18.230 [2024-11-04 14:35:17.177141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:18.230 [2024-11-04 14:35:17.179518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:18.230 [2024-11-04 14:35:17.179618] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:18.230 [2024-11-04 14:35:17.179639] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:18.230 [2024-11-04 14:35:17.179973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:18.230 [2024-11-04 14:35:17.180176] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:18.230 [2024-11-04 14:35:17.180198] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:09:18.230 [2024-11-04 14:35:17.180382] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:18.230 14:35:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.230 14:35:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:18.230 14:35:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.230 14:35:17 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:09:18.230 14:35:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:18.230 14:35:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.230 14:35:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:09:18.230 14:35:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:09:18.230 14:35:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:09:18.230 14:35:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:09:18.230 14:35:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:09:18.230 14:35:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:18.230 14:35:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:18.230 14:35:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:18.230 14:35:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:09:18.230 14:35:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:18.230 14:35:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:18.230 14:35:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:09:18.489 [2024-11-04 14:35:17.549325] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:18.489 /dev/nbd0 00:09:18.489 14:35:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:18.489 14:35:17 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:18.489 14:35:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:09:18.489 14:35:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # local i 00:09:18.489 14:35:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:18.489 14:35:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:18.489 14:35:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:09:18.489 14:35:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # break 00:09:18.489 14:35:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:18.489 14:35:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:18.489 14:35:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:18.489 1+0 records in 00:09:18.489 1+0 records out 00:09:18.489 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347481 s, 11.8 MB/s 00:09:18.489 14:35:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:18.489 14:35:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # size=4096 00:09:18.489 14:35:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:18.747 14:35:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:18.747 14:35:17 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # return 0 00:09:18.747 14:35:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:18.747 
14:35:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:18.747 14:35:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:09:18.747 14:35:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:09:18.747 14:35:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:09:19.006 14:35:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:19.006 { 00:09:19.006 "nbd_device": "/dev/nbd0", 00:09:19.006 "bdev_name": "raid" 00:09:19.006 } 00:09:19.006 ]' 00:09:19.006 14:35:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:19.006 { 00:09:19.006 "nbd_device": "/dev/nbd0", 00:09:19.006 "bdev_name": "raid" 00:09:19.006 } 00:09:19.006 ]' 00:09:19.006 14:35:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:19.006 14:35:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:09:19.006 14:35:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:09:19.006 14:35:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:19.006 14:35:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:09:19.006 14:35:17 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:09:19.006 14:35:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:09:19.006 14:35:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:09:19.006 14:35:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:09:19.006 14:35:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:09:19.006 
14:35:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:09:19.006 14:35:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:09:19.006 14:35:17 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:09:19.006 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:09:19.006 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:09:19.006 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:09:19.006 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:09:19.006 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:09:19.006 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:09:19.006 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:09:19.006 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:09:19.006 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:09:19.006 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:09:19.006 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:09:19.006 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:09:19.006 4096+0 records in 00:09:19.006 4096+0 records out 00:09:19.006 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0263078 s, 79.7 MB/s 00:09:19.006 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:09:19.573 4096+0 records in 00:09:19.573 4096+0 
records out 00:09:19.573 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.348159 s, 6.0 MB/s 00:09:19.573 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:09:19.573 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:19.573 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:09:19.573 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:19.573 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:09:19.573 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:09:19.573 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:09:19.573 128+0 records in 00:09:19.573 128+0 records out 00:09:19.573 65536 bytes (66 kB, 64 KiB) copied, 0.000816981 s, 80.2 MB/s 00:09:19.573 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:09:19.573 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:19.573 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:19.573 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:19.573 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:19.573 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:09:19.573 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:09:19.573 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
00:09:19.573 2035+0 records in 00:09:19.573 2035+0 records out 00:09:19.573 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0109363 s, 95.3 MB/s 00:09:19.573 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:09:19.573 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:19.573 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:19.573 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:19.573 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:19.573 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:09:19.573 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:09:19.573 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:09:19.573 456+0 records in 00:09:19.573 456+0 records out 00:09:19.573 233472 bytes (233 kB, 228 KiB) copied, 0.00333076 s, 70.1 MB/s 00:09:19.573 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:09:19.573 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:09:19.573 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:09:19.573 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:09:19.573 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:09:19.573 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:09:19.573 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:09:19.573 14:35:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:09:19.573 14:35:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:19.574 14:35:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:19.574 14:35:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:09:19.574 14:35:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:19.574 14:35:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:09:19.832 [2024-11-04 14:35:18.806138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.832 14:35:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:19.832 14:35:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:19.832 14:35:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:19.832 14:35:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:19.832 14:35:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:19.832 14:35:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:19.832 14:35:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:09:19.832 14:35:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:09:19.832 14:35:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:09:19.832 14:35:18 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:09:19.832 14:35:18 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:09:20.091 14:35:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:20.091 14:35:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:20.091 14:35:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:20.091 14:35:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:20.091 14:35:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:20.091 14:35:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:20.091 14:35:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:09:20.091 14:35:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:09:20.091 14:35:19 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:20.091 14:35:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:09:20.091 14:35:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:09:20.091 14:35:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60388 00:09:20.091 14:35:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # '[' -z 60388 ']' 00:09:20.091 14:35:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # kill -0 60388 00:09:20.091 14:35:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # uname 00:09:20.091 14:35:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:20.091 14:35:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60388 00:09:20.349 14:35:19 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:20.349 14:35:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:20.349 killing process with pid 60388 00:09:20.349 14:35:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60388' 00:09:20.349 14:35:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@971 -- # kill 60388 00:09:20.349 [2024-11-04 14:35:19.238471] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:20.349 [2024-11-04 14:35:19.238594] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.349 14:35:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@976 -- # wait 60388 00:09:20.349 [2024-11-04 14:35:19.238662] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:20.349 [2024-11-04 14:35:19.238682] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:09:20.349 [2024-11-04 14:35:19.420702] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:21.726 14:35:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:09:21.726 00:09:21.726 real 0m4.468s 00:09:21.726 user 0m5.601s 00:09:21.726 sys 0m1.027s 00:09:21.726 14:35:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:21.726 14:35:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:09:21.726 ************************************ 00:09:21.726 END TEST raid_function_test_concat 00:09:21.726 ************************************ 00:09:21.726 14:35:20 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:09:21.726 14:35:20 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:21.726 14:35:20 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:09:21.726 14:35:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:21.726 ************************************ 00:09:21.726 START TEST raid0_resize_test 00:09:21.726 ************************************ 00:09:21.726 14:35:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 0 00:09:21.726 14:35:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:09:21.726 14:35:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:09:21.726 14:35:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:09:21.726 14:35:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:09:21.726 14:35:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:09:21.726 14:35:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:09:21.726 14:35:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:09:21.726 14:35:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:09:21.726 14:35:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60527 00:09:21.726 Process raid pid: 60527 00:09:21.726 14:35:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60527' 00:09:21.726 14:35:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60527 00:09:21.726 14:35:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:21.726 14:35:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # '[' -z 60527 ']' 00:09:21.727 14:35:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.727 14:35:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:09:21.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.727 14:35:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.727 14:35:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:21.727 14:35:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.727 [2024-11-04 14:35:20.599174] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:09:21.727 [2024-11-04 14:35:20.599373] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.727 [2024-11-04 14:35:20.796523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.985 [2024-11-04 14:35:20.955295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.244 [2024-11-04 14:35:21.194745] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.244 [2024-11-04 14:35:21.194796] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@866 -- # return 0 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.811 Base_1 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.811 
14:35:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.811 Base_2 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.811 [2024-11-04 14:35:21.685616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:22.811 [2024-11-04 14:35:21.688008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:22.811 [2024-11-04 14:35:21.688096] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:22.811 [2024-11-04 14:35:21.688116] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:22.811 [2024-11-04 14:35:21.688413] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:22.811 [2024-11-04 14:35:21.688587] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:22.811 [2024-11-04 14:35:21.688613] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:22.811 [2024-11-04 14:35:21.688788] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.811 
14:35:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.811 [2024-11-04 14:35:21.693610] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:22.811 [2024-11-04 14:35:21.693650] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:09:22.811 true 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:09:22.811 [2024-11-04 14:35:21.705813] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.811 [2024-11-04 14:35:21.753602] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:22.811 [2024-11-04 14:35:21.753635] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:09:22.811 [2024-11-04 14:35:21.753690] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:09:22.811 true 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:09:22.811 [2024-11-04 14:35:21.765824] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60527 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@952 -- # '[' -z 60527 ']' 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # kill -0 60527 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # uname 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60527 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:22.811 killing process with pid 60527 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60527' 00:09:22.811 14:35:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@971 -- # kill 60527 00:09:22.812 [2024-11-04 14:35:21.850613] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:22.812 [2024-11-04 14:35:21.850729] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:22.812 [2024-11-04 14:35:21.850809] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:22.812 [2024-11-04 14:35:21.850826] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:09:22.812 14:35:21 bdev_raid.raid0_resize_test -- common/autotest_common.sh@976 -- # wait 60527 00:09:22.812 [2024-11-04 14:35:21.866150] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:24.187 14:35:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:09:24.187 00:09:24.187 real 0m2.390s 00:09:24.187 user 0m2.719s 00:09:24.187 sys 0m0.391s 00:09:24.187 14:35:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:24.187 
14:35:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.187 ************************************ 00:09:24.187 END TEST raid0_resize_test 00:09:24.187 ************************************ 00:09:24.187 14:35:22 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:09:24.187 14:35:22 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:24.187 14:35:22 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:24.187 14:35:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:24.187 ************************************ 00:09:24.187 START TEST raid1_resize_test 00:09:24.187 ************************************ 00:09:24.187 14:35:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 1 00:09:24.187 14:35:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:09:24.187 14:35:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:09:24.187 14:35:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:09:24.187 14:35:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:09:24.187 14:35:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:09:24.187 14:35:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:09:24.187 14:35:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:09:24.187 14:35:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:09:24.187 14:35:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60584 00:09:24.187 Process raid pid: 60584 00:09:24.187 14:35:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60584' 00:09:24.187 14:35:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60584 00:09:24.187 14:35:22 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@833 -- # '[' -z 60584 ']' 00:09:24.187 14:35:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:24.187 14:35:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.187 14:35:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:24.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.187 14:35:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.187 14:35:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:24.187 14:35:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.187 [2024-11-04 14:35:23.028385] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:09:24.187 [2024-11-04 14:35:23.028543] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.187 [2024-11-04 14:35:23.210839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.446 [2024-11-04 14:35:23.366991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.704 [2024-11-04 14:35:23.592628] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.704 [2024-11-04 14:35:23.592696] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.963 14:35:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:24.963 14:35:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@866 -- # return 0 00:09:24.963 14:35:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:09:24.963 14:35:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.963 14:35:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.963 Base_1 00:09:24.963 14:35:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.963 14:35:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:09:24.963 14:35:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.963 14:35:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.222 Base_2 00:09:25.222 14:35:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.222 14:35:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:09:25.222 14:35:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:09:25.222 14:35:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.222 14:35:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.222 [2024-11-04 14:35:24.094326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:25.222 [2024-11-04 14:35:24.096665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:25.222 [2024-11-04 14:35:24.096760] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:25.222 [2024-11-04 14:35:24.096781] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:25.222 [2024-11-04 14:35:24.097098] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:09:25.222 [2024-11-04 14:35:24.097287] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:25.222 [2024-11-04 14:35:24.097312] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:09:25.222 [2024-11-04 14:35:24.097490] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:25.222 14:35:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.222 14:35:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:09:25.222 14:35:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.222 14:35:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.222 [2024-11-04 14:35:24.102313] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:25.222 [2024-11-04 14:35:24.102359] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:09:25.222 true 00:09:25.222 
14:35:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.222 14:35:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:25.222 14:35:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.222 14:35:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.222 14:35:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:09:25.222 [2024-11-04 14:35:24.114512] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:25.223 14:35:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.223 14:35:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:09:25.223 14:35:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:09:25.223 14:35:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:09:25.223 14:35:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:09:25.223 14:35:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:09:25.223 14:35:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:09:25.223 14:35:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.223 14:35:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.223 [2024-11-04 14:35:24.166324] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:25.223 [2024-11-04 14:35:24.166359] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:09:25.223 [2024-11-04 14:35:24.166401] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:09:25.223 true 00:09:25.223 14:35:24 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.223 14:35:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:09:25.223 14:35:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.223 14:35:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:09:25.223 14:35:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.223 [2024-11-04 14:35:24.178537] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:25.223 14:35:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.223 14:35:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:09:25.223 14:35:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:09:25.223 14:35:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:09:25.223 14:35:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:09:25.223 14:35:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:09:25.223 14:35:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60584 00:09:25.223 14:35:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@952 -- # '[' -z 60584 ']' 00:09:25.223 14:35:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # kill -0 60584 00:09:25.223 14:35:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # uname 00:09:25.223 14:35:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:25.223 14:35:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60584 00:09:25.223 14:35:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:25.223 killing process with pid 60584 00:09:25.223 14:35:24 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:25.223 14:35:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60584' 00:09:25.223 14:35:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@971 -- # kill 60584 00:09:25.223 [2024-11-04 14:35:24.251836] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:25.223 14:35:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@976 -- # wait 60584 00:09:25.223 [2024-11-04 14:35:24.251957] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:25.223 [2024-11-04 14:35:24.252539] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:25.223 [2024-11-04 14:35:24.252574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:09:25.223 [2024-11-04 14:35:24.267626] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:26.633 14:35:25 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:09:26.633 00:09:26.633 real 0m2.353s 00:09:26.633 user 0m2.631s 00:09:26.633 sys 0m0.403s 00:09:26.633 14:35:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:26.633 14:35:25 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.633 ************************************ 00:09:26.633 END TEST raid1_resize_test 00:09:26.633 ************************************ 00:09:26.633 14:35:25 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:26.633 14:35:25 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:26.633 14:35:25 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:09:26.633 14:35:25 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:26.633 14:35:25 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:09:26.633 14:35:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:26.633 ************************************ 00:09:26.633 START TEST raid_state_function_test 00:09:26.633 ************************************ 00:09:26.633 14:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 false 00:09:26.633 14:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:26.633 14:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:26.633 14:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:26.633 14:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:26.633 14:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:26.633 14:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:26.633 14:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:26.633 14:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:26.633 14:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:26.633 14:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:26.633 14:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:26.633 14:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:26.633 14:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:26.633 14:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:26.633 14:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:09:26.633 14:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:26.633 14:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:26.633 14:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:26.633 14:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:26.633 14:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:26.633 14:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:26.633 14:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:26.633 14:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:26.633 Process raid pid: 60646 00:09:26.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:26.633 14:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60646 00:09:26.633 14:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60646' 00:09:26.633 14:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60646 00:09:26.633 14:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 60646 ']' 00:09:26.634 14:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:26.634 14:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.634 14:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:26.634 14:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.634 14:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:26.634 14:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.634 [2024-11-04 14:35:25.460958] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:09:26.634 [2024-11-04 14:35:25.461693] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.634 [2024-11-04 14:35:25.650390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.892 [2024-11-04 14:35:25.769700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.892 [2024-11-04 14:35:25.973279] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.892 [2024-11-04 14:35:25.973357] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:27.459 14:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:27.460 14:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:09:27.460 14:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:27.460 14:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.460 14:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.460 [2024-11-04 14:35:26.475578] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:27.460 [2024-11-04 14:35:26.475645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:27.460 [2024-11-04 14:35:26.475662] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:27.460 [2024-11-04 14:35:26.475679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:27.460 14:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.460 14:35:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:27.460 14:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.460 14:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.460 14:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:27.460 14:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.460 14:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:27.460 14:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.460 14:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.460 14:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.460 14:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.460 14:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.460 14:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.460 14:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.460 14:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.460 14:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.460 14:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.460 "name": "Existed_Raid", 00:09:27.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.460 "strip_size_kb": 64, 00:09:27.460 "state": "configuring", 00:09:27.460 
"raid_level": "raid0", 00:09:27.460 "superblock": false, 00:09:27.460 "num_base_bdevs": 2, 00:09:27.460 "num_base_bdevs_discovered": 0, 00:09:27.460 "num_base_bdevs_operational": 2, 00:09:27.460 "base_bdevs_list": [ 00:09:27.460 { 00:09:27.460 "name": "BaseBdev1", 00:09:27.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.460 "is_configured": false, 00:09:27.460 "data_offset": 0, 00:09:27.460 "data_size": 0 00:09:27.460 }, 00:09:27.460 { 00:09:27.460 "name": "BaseBdev2", 00:09:27.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.460 "is_configured": false, 00:09:27.460 "data_offset": 0, 00:09:27.460 "data_size": 0 00:09:27.460 } 00:09:27.460 ] 00:09:27.460 }' 00:09:27.460 14:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.460 14:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.026 14:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:28.027 14:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.027 14:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.027 [2024-11-04 14:35:26.967646] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:28.027 [2024-11-04 14:35:26.967693] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:28.027 14:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.027 14:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:28.027 14:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.027 14:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:28.027 [2024-11-04 14:35:26.975626] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:28.027 [2024-11-04 14:35:26.975683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:28.027 [2024-11-04 14:35:26.975699] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:28.027 [2024-11-04 14:35:26.975719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:28.027 14:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.027 14:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:28.027 14:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.027 14:35:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.027 [2024-11-04 14:35:27.022955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:28.027 BaseBdev1 00:09:28.027 14:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.027 14:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:28.027 14:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:28.027 14:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:28.027 14:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:28.027 14:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:28.027 14:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:28.027 14:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:09:28.027 14:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.027 14:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.027 14:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.027 14:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:28.027 14:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.027 14:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.027 [ 00:09:28.027 { 00:09:28.027 "name": "BaseBdev1", 00:09:28.027 "aliases": [ 00:09:28.027 "2abbc0ff-d1ba-4675-af88-b14fd1cf99ec" 00:09:28.027 ], 00:09:28.027 "product_name": "Malloc disk", 00:09:28.027 "block_size": 512, 00:09:28.027 "num_blocks": 65536, 00:09:28.027 "uuid": "2abbc0ff-d1ba-4675-af88-b14fd1cf99ec", 00:09:28.027 "assigned_rate_limits": { 00:09:28.027 "rw_ios_per_sec": 0, 00:09:28.027 "rw_mbytes_per_sec": 0, 00:09:28.027 "r_mbytes_per_sec": 0, 00:09:28.027 "w_mbytes_per_sec": 0 00:09:28.027 }, 00:09:28.027 "claimed": true, 00:09:28.027 "claim_type": "exclusive_write", 00:09:28.027 "zoned": false, 00:09:28.027 "supported_io_types": { 00:09:28.027 "read": true, 00:09:28.027 "write": true, 00:09:28.027 "unmap": true, 00:09:28.027 "flush": true, 00:09:28.027 "reset": true, 00:09:28.027 "nvme_admin": false, 00:09:28.027 "nvme_io": false, 00:09:28.027 "nvme_io_md": false, 00:09:28.027 "write_zeroes": true, 00:09:28.027 "zcopy": true, 00:09:28.027 "get_zone_info": false, 00:09:28.027 "zone_management": false, 00:09:28.027 "zone_append": false, 00:09:28.027 "compare": false, 00:09:28.027 "compare_and_write": false, 00:09:28.027 "abort": true, 00:09:28.027 "seek_hole": false, 00:09:28.027 "seek_data": false, 00:09:28.027 "copy": true, 00:09:28.027 "nvme_iov_md": 
false 00:09:28.027 }, 00:09:28.027 "memory_domains": [ 00:09:28.027 { 00:09:28.027 "dma_device_id": "system", 00:09:28.027 "dma_device_type": 1 00:09:28.027 }, 00:09:28.027 { 00:09:28.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.027 "dma_device_type": 2 00:09:28.027 } 00:09:28.027 ], 00:09:28.027 "driver_specific": {} 00:09:28.027 } 00:09:28.027 ] 00:09:28.027 14:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.027 14:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:28.027 14:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:28.027 14:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.027 14:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.027 14:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:28.027 14:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.027 14:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:28.027 14:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.027 14:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.027 14:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.027 14:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.027 14:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.027 14:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.027 14:35:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.027 14:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.027 14:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.027 14:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.027 "name": "Existed_Raid", 00:09:28.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.027 "strip_size_kb": 64, 00:09:28.027 "state": "configuring", 00:09:28.027 "raid_level": "raid0", 00:09:28.027 "superblock": false, 00:09:28.027 "num_base_bdevs": 2, 00:09:28.027 "num_base_bdevs_discovered": 1, 00:09:28.027 "num_base_bdevs_operational": 2, 00:09:28.027 "base_bdevs_list": [ 00:09:28.027 { 00:09:28.027 "name": "BaseBdev1", 00:09:28.027 "uuid": "2abbc0ff-d1ba-4675-af88-b14fd1cf99ec", 00:09:28.027 "is_configured": true, 00:09:28.027 "data_offset": 0, 00:09:28.027 "data_size": 65536 00:09:28.027 }, 00:09:28.027 { 00:09:28.027 "name": "BaseBdev2", 00:09:28.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.027 "is_configured": false, 00:09:28.027 "data_offset": 0, 00:09:28.027 "data_size": 0 00:09:28.027 } 00:09:28.027 ] 00:09:28.027 }' 00:09:28.027 14:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.027 14:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.594 14:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:28.594 14:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.594 14:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.594 [2024-11-04 14:35:27.607160] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:28.594 [2024-11-04 14:35:27.607229] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:28.594 14:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.594 14:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:28.594 14:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.594 14:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.594 [2024-11-04 14:35:27.615205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:28.594 [2024-11-04 14:35:27.617571] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:28.594 [2024-11-04 14:35:27.617628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:28.594 14:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.594 14:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:28.594 14:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:28.594 14:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:28.594 14:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.594 14:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.594 14:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:28.594 14:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.594 14:35:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:28.594 14:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.594 14:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.594 14:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.594 14:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.594 14:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.594 14:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.594 14:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.594 14:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.594 14:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.594 14:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.594 "name": "Existed_Raid", 00:09:28.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.594 "strip_size_kb": 64, 00:09:28.594 "state": "configuring", 00:09:28.594 "raid_level": "raid0", 00:09:28.594 "superblock": false, 00:09:28.594 "num_base_bdevs": 2, 00:09:28.594 "num_base_bdevs_discovered": 1, 00:09:28.594 "num_base_bdevs_operational": 2, 00:09:28.594 "base_bdevs_list": [ 00:09:28.594 { 00:09:28.594 "name": "BaseBdev1", 00:09:28.594 "uuid": "2abbc0ff-d1ba-4675-af88-b14fd1cf99ec", 00:09:28.594 "is_configured": true, 00:09:28.594 "data_offset": 0, 00:09:28.594 "data_size": 65536 00:09:28.594 }, 00:09:28.594 { 00:09:28.594 "name": "BaseBdev2", 00:09:28.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.594 "is_configured": false, 00:09:28.594 "data_offset": 0, 00:09:28.594 "data_size": 0 
00:09:28.594 } 00:09:28.594 ] 00:09:28.594 }' 00:09:28.594 14:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.594 14:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.162 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:29.162 14:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.162 14:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.162 [2024-11-04 14:35:28.169193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:29.162 [2024-11-04 14:35:28.169249] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:29.162 [2024-11-04 14:35:28.169264] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:29.162 [2024-11-04 14:35:28.169630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:29.162 [2024-11-04 14:35:28.169844] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:29.162 [2024-11-04 14:35:28.169868] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:29.162 [2024-11-04 14:35:28.170209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:29.162 BaseBdev2 00:09:29.162 14:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.162 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:29.162 14:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:29.162 14:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:29.162 14:35:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:29.162 14:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:29.162 14:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:29.162 14:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:29.162 14:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.162 14:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.162 14:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.162 14:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:29.162 14:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.162 14:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.162 [ 00:09:29.162 { 00:09:29.162 "name": "BaseBdev2", 00:09:29.162 "aliases": [ 00:09:29.162 "bd17a91c-2e48-43cf-95c5-ea8df609c45e" 00:09:29.162 ], 00:09:29.162 "product_name": "Malloc disk", 00:09:29.163 "block_size": 512, 00:09:29.163 "num_blocks": 65536, 00:09:29.163 "uuid": "bd17a91c-2e48-43cf-95c5-ea8df609c45e", 00:09:29.163 "assigned_rate_limits": { 00:09:29.163 "rw_ios_per_sec": 0, 00:09:29.163 "rw_mbytes_per_sec": 0, 00:09:29.163 "r_mbytes_per_sec": 0, 00:09:29.163 "w_mbytes_per_sec": 0 00:09:29.163 }, 00:09:29.163 "claimed": true, 00:09:29.163 "claim_type": "exclusive_write", 00:09:29.163 "zoned": false, 00:09:29.163 "supported_io_types": { 00:09:29.163 "read": true, 00:09:29.163 "write": true, 00:09:29.163 "unmap": true, 00:09:29.163 "flush": true, 00:09:29.163 "reset": true, 00:09:29.163 "nvme_admin": false, 00:09:29.163 "nvme_io": false, 00:09:29.163 "nvme_io_md": 
false, 00:09:29.163 "write_zeroes": true, 00:09:29.163 "zcopy": true, 00:09:29.163 "get_zone_info": false, 00:09:29.163 "zone_management": false, 00:09:29.163 "zone_append": false, 00:09:29.163 "compare": false, 00:09:29.163 "compare_and_write": false, 00:09:29.163 "abort": true, 00:09:29.163 "seek_hole": false, 00:09:29.163 "seek_data": false, 00:09:29.163 "copy": true, 00:09:29.163 "nvme_iov_md": false 00:09:29.163 }, 00:09:29.163 "memory_domains": [ 00:09:29.163 { 00:09:29.163 "dma_device_id": "system", 00:09:29.163 "dma_device_type": 1 00:09:29.163 }, 00:09:29.163 { 00:09:29.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.163 "dma_device_type": 2 00:09:29.163 } 00:09:29.163 ], 00:09:29.163 "driver_specific": {} 00:09:29.163 } 00:09:29.163 ] 00:09:29.163 14:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.163 14:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:29.163 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:29.163 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:29.163 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:09:29.163 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.163 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:29.163 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:29.163 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.163 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:29.163 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:29.163 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.163 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.163 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.163 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.163 14:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.163 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.163 14:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.163 14:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.163 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.163 "name": "Existed_Raid", 00:09:29.163 "uuid": "02c717f0-fe51-43b8-9c68-46b60b3763c9", 00:09:29.163 "strip_size_kb": 64, 00:09:29.163 "state": "online", 00:09:29.163 "raid_level": "raid0", 00:09:29.163 "superblock": false, 00:09:29.163 "num_base_bdevs": 2, 00:09:29.163 "num_base_bdevs_discovered": 2, 00:09:29.163 "num_base_bdevs_operational": 2, 00:09:29.163 "base_bdevs_list": [ 00:09:29.163 { 00:09:29.163 "name": "BaseBdev1", 00:09:29.163 "uuid": "2abbc0ff-d1ba-4675-af88-b14fd1cf99ec", 00:09:29.163 "is_configured": true, 00:09:29.163 "data_offset": 0, 00:09:29.163 "data_size": 65536 00:09:29.163 }, 00:09:29.163 { 00:09:29.163 "name": "BaseBdev2", 00:09:29.163 "uuid": "bd17a91c-2e48-43cf-95c5-ea8df609c45e", 00:09:29.163 "is_configured": true, 00:09:29.163 "data_offset": 0, 00:09:29.163 "data_size": 65536 00:09:29.163 } 00:09:29.163 ] 00:09:29.163 }' 00:09:29.163 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
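The `verify_raid_bdev_state` helper traced above selects the named record out of `rpc_cmd bdev_raid_get_bdevs all` with `jq -r '.[] | select(.name == "Existed_Raid")'` and then compares the state fields one by one. A minimal Python sketch of that check, using an abbreviated copy of the record this log prints (the function is illustrative, not the actual shell helper):

```python
import json

# Abbreviated copy of the Existed_Raid record printed in this log
# after both base bdevs were claimed.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "raid0",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, num_operational):
    # Equivalent of the shell helper's [[ ... == ... ]] comparisons.
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == num_operational)
```

This is the transition the test is driving: with only BaseBdev1 present the same check passed for `configuring`, and once BaseBdev2 was claimed the state moved to `online`.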
00:09:29.163 14:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.732 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:29.732 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:29.732 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:29.732 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:29.732 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:29.732 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:29.732 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:29.732 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:29.732 14:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.732 14:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.732 [2024-11-04 14:35:28.753799] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:29.732 14:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.732 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:29.732 "name": "Existed_Raid", 00:09:29.732 "aliases": [ 00:09:29.732 "02c717f0-fe51-43b8-9c68-46b60b3763c9" 00:09:29.732 ], 00:09:29.732 "product_name": "Raid Volume", 00:09:29.732 "block_size": 512, 00:09:29.732 "num_blocks": 131072, 00:09:29.732 "uuid": "02c717f0-fe51-43b8-9c68-46b60b3763c9", 00:09:29.732 "assigned_rate_limits": { 00:09:29.732 "rw_ios_per_sec": 0, 00:09:29.732 "rw_mbytes_per_sec": 0, 00:09:29.732 "r_mbytes_per_sec": 
0, 00:09:29.732 "w_mbytes_per_sec": 0 00:09:29.732 }, 00:09:29.732 "claimed": false, 00:09:29.732 "zoned": false, 00:09:29.732 "supported_io_types": { 00:09:29.732 "read": true, 00:09:29.732 "write": true, 00:09:29.732 "unmap": true, 00:09:29.732 "flush": true, 00:09:29.732 "reset": true, 00:09:29.732 "nvme_admin": false, 00:09:29.732 "nvme_io": false, 00:09:29.732 "nvme_io_md": false, 00:09:29.732 "write_zeroes": true, 00:09:29.732 "zcopy": false, 00:09:29.732 "get_zone_info": false, 00:09:29.732 "zone_management": false, 00:09:29.732 "zone_append": false, 00:09:29.732 "compare": false, 00:09:29.732 "compare_and_write": false, 00:09:29.732 "abort": false, 00:09:29.732 "seek_hole": false, 00:09:29.732 "seek_data": false, 00:09:29.732 "copy": false, 00:09:29.732 "nvme_iov_md": false 00:09:29.732 }, 00:09:29.732 "memory_domains": [ 00:09:29.732 { 00:09:29.732 "dma_device_id": "system", 00:09:29.732 "dma_device_type": 1 00:09:29.732 }, 00:09:29.732 { 00:09:29.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.732 "dma_device_type": 2 00:09:29.732 }, 00:09:29.732 { 00:09:29.732 "dma_device_id": "system", 00:09:29.732 "dma_device_type": 1 00:09:29.732 }, 00:09:29.732 { 00:09:29.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.732 "dma_device_type": 2 00:09:29.732 } 00:09:29.732 ], 00:09:29.732 "driver_specific": { 00:09:29.732 "raid": { 00:09:29.732 "uuid": "02c717f0-fe51-43b8-9c68-46b60b3763c9", 00:09:29.732 "strip_size_kb": 64, 00:09:29.732 "state": "online", 00:09:29.732 "raid_level": "raid0", 00:09:29.732 "superblock": false, 00:09:29.732 "num_base_bdevs": 2, 00:09:29.732 "num_base_bdevs_discovered": 2, 00:09:29.732 "num_base_bdevs_operational": 2, 00:09:29.733 "base_bdevs_list": [ 00:09:29.733 { 00:09:29.733 "name": "BaseBdev1", 00:09:29.733 "uuid": "2abbc0ff-d1ba-4675-af88-b14fd1cf99ec", 00:09:29.733 "is_configured": true, 00:09:29.733 "data_offset": 0, 00:09:29.733 "data_size": 65536 00:09:29.733 }, 00:09:29.733 { 00:09:29.733 "name": "BaseBdev2", 
00:09:29.733 "uuid": "bd17a91c-2e48-43cf-95c5-ea8df609c45e", 00:09:29.733 "is_configured": true, 00:09:29.733 "data_offset": 0, 00:09:29.733 "data_size": 65536 00:09:29.733 } 00:09:29.733 ] 00:09:29.733 } 00:09:29.733 } 00:09:29.733 }' 00:09:29.733 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:29.991 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:29.991 BaseBdev2' 00:09:29.991 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.991 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:29.991 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.991 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:29.991 14:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.991 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.991 14:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.991 14:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.991 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.991 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.991 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.991 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:09:29.991 14:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.991 14:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.991 14:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.991 14:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.991 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.991 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.991 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:29.991 14:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.991 14:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.991 [2024-11-04 14:35:29.029575] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:29.991 [2024-11-04 14:35:29.029619] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:29.991 [2024-11-04 14:35:29.029688] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:30.250 14:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.250 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:30.250 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:30.250 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:30.250 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:30.250 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline
00:09:30.250 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1
00:09:30.250 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:30.250 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:09:30.250 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:30.250 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:30.250 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:09:30.250 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:30.250 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:30.250 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:30.250 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:30.250 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:30.250 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:30.250 14:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:30.250 14:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.250 14:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:30.250 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:30.250 "name": "Existed_Raid",
00:09:30.250 "uuid": "02c717f0-fe51-43b8-9c68-46b60b3763c9",
00:09:30.250 "strip_size_kb": 64,
00:09:30.250 "state": "offline",
00:09:30.250 "raid_level": "raid0",
00:09:30.250 "superblock": false,
00:09:30.250 "num_base_bdevs": 2,
00:09:30.250 "num_base_bdevs_discovered": 1,
00:09:30.250 "num_base_bdevs_operational": 1,
00:09:30.250 "base_bdevs_list": [
00:09:30.250 {
00:09:30.250 "name": null,
00:09:30.250 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:30.250 "is_configured": false,
00:09:30.250 "data_offset": 0,
00:09:30.250 "data_size": 65536
00:09:30.250 },
00:09:30.250 {
00:09:30.250 "name": "BaseBdev2",
00:09:30.250 "uuid": "bd17a91c-2e48-43cf-95c5-ea8df609c45e",
00:09:30.250 "is_configured": true,
00:09:30.250 "data_offset": 0,
00:09:30.250 "data_size": 65536
00:09:30.250 }
00:09:30.250 ]
00:09:30.250 }'
00:09:30.250 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:30.250 14:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.509 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:09:30.509 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:30.768 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:30.768 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:30.768 14:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:30.768 14:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.768 14:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:30.768 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:30.768 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:30.768 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:09:30.768 14:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:30.768 14:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.768 [2024-11-04 14:35:29.688385] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:30.768 [2024-11-04 14:35:29.688586] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:09:30.768 14:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:30.768 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:30.768 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:30.768 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:30.768 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:09:30.768 14:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:30.768 14:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:30.768 14:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:30.768 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:09:30.768 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:09:30.768 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:09:30.768 14:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60646
00:09:30.768 14:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 60646 ']'
00:09:30.768 14:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 60646
00:09:30.768 14:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname
00:09:30.768 14:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:09:30.768 14:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60646
00:09:30.768 killing process with pid 60646
00:09:30.768 14:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:09:30.768 14:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:09:30.768 14:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60646'
00:09:30.768 14:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 60646
00:09:30.768 [2024-11-04 14:35:29.875409] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:30.768 14:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 60646
00:09:31.027 [2024-11-04 14:35:29.890333] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:31.963 14:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:09:31.963
00:09:31.963 real 0m5.625s
00:09:31.963 user 0m8.578s
00:09:31.963 sys 0m0.742s
00:09:31.963 14:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:31.963 14:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:31.963 ************************************
00:09:31.963 END TEST raid_state_function_test
00:09:31.963 ************************************
00:09:31.963 14:35:31 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true
00:09:31.963 14:35:31 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:09:31.963 14:35:31 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:09:31.963 14:35:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:31.963 ************************************
00:09:31.963 START TEST raid_state_function_test_sb
00:09:31.963 ************************************
00:09:31.963 14:35:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 true
00:09:31.963 14:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:09:31.963 14:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:09:31.963 14:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:09:31.963 14:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:09:31.963 14:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:09:31.963 14:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:31.963 14:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:09:31.963 14:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:31.963 14:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:31.963 14:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:09:31.963 14:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:31.963 14:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:31.963 14:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:09:31.963 14:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:09:31.963 14:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:09:31.963 14:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:09:31.963 14:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:09:31.963 14:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:09:31.963 14:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:09:31.963 14:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:09:31.963 14:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:09:31.963 14:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:09:31.963 14:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:09:31.963 Process raid pid: 60900
00:09:31.963 14:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60900
00:09:31.963 14:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60900'
00:09:31.963 14:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60900
00:09:31.963 14:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:09:31.963 14:35:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 60900 ']'
00:09:31.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:31.963 14:35:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:31.963 14:35:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100
00:09:31.963 14:35:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:31.963 14:35:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable
00:09:31.963 14:35:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:32.222 [2024-11-04 14:35:31.131559] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization...
00:09:32.222 [2024-11-04 14:35:31.132650] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:32.222 [2024-11-04 14:35:31.323168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:32.480 [2024-11-04 14:35:31.487208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:32.739 [2024-11-04 14:35:31.728512] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:32.739 [2024-11-04 14:35:31.728772] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:32.998 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:09:32.998 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0
00:09:32.998 14:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:09:32.998 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:32.998 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:32.998 [2024-11-04 14:35:32.107082] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:32.998 [2024-11-04 14:35:32.107280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:32.998 [2024-11-04 14:35:32.107309] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:32.998 [2024-11-04 14:35:32.107327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:32.998 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:32.998 14:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:09:32.998 14:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:32.998 14:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:32.998 14:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:32.998 14:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:32.998 14:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:32.998 14:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:32.998 14:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:32.998 14:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:32.998 14:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:32.998 14:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:32.998 14:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:32.998 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:32.998 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:33.257 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:33.257 14:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:33.257 "name": "Existed_Raid",
00:09:33.257 "uuid": "98c4ccdc-8cd3-42dd-a17f-0c7423ebeb93",
00:09:33.257 "strip_size_kb": 64,
00:09:33.257 "state": "configuring",
00:09:33.257 "raid_level": "raid0",
00:09:33.257 "superblock": true,
00:09:33.257 "num_base_bdevs": 2,
00:09:33.257 "num_base_bdevs_discovered": 0,
00:09:33.257 "num_base_bdevs_operational": 2,
00:09:33.257 "base_bdevs_list": [
00:09:33.257 {
00:09:33.257 "name": "BaseBdev1",
00:09:33.257 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:33.257 "is_configured": false,
00:09:33.257 "data_offset": 0,
00:09:33.257 "data_size": 0
00:09:33.257 },
00:09:33.257 {
00:09:33.257 "name": "BaseBdev2",
00:09:33.257 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:33.257 "is_configured": false,
00:09:33.257 "data_offset": 0,
00:09:33.257 "data_size": 0
00:09:33.257 }
00:09:33.257 ]
00:09:33.257 }'
00:09:33.257 14:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:33.257 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:33.516 14:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:33.516 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:33.516 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:33.516 [2024-11-04 14:35:32.615178] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:33.516 [2024-11-04 14:35:32.615237] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:09:33.516 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:33.516 14:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:09:33.516 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:33.516 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:33.516 [2024-11-04 14:35:32.623175] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:33.516 [2024-11-04 14:35:32.623236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:33.516 [2024-11-04 14:35:32.623252] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:33.516 [2024-11-04 14:35:32.623271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:33.516 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:33.516 14:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:33.516 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:33.516 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:33.775 [2024-11-04 14:35:32.669171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:33.775 BaseBdev1
00:09:33.775 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:33.775 14:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:09:33.775 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1
00:09:33.775 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:09:33.775 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:09:33.775 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:09:33.775 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:09:33.775 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:09:33.775 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:33.775 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:33.775 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:33.775 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:33.775 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:33.775 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:33.775 [
00:09:33.775 {
00:09:33.775 "name": "BaseBdev1",
00:09:33.775 "aliases": [
00:09:33.775 "645bd4f0-8c28-4f6b-901b-d17c969746ae"
00:09:33.775 ],
00:09:33.775 "product_name": "Malloc disk",
00:09:33.775 "block_size": 512,
00:09:33.775 "num_blocks": 65536,
00:09:33.775 "uuid": "645bd4f0-8c28-4f6b-901b-d17c969746ae",
00:09:33.775 "assigned_rate_limits": {
00:09:33.775 "rw_ios_per_sec": 0,
00:09:33.775 "rw_mbytes_per_sec": 0,
00:09:33.775 "r_mbytes_per_sec": 0,
00:09:33.775 "w_mbytes_per_sec": 0
00:09:33.775 },
00:09:33.775 "claimed": true,
00:09:33.775 "claim_type": "exclusive_write",
00:09:33.775 "zoned": false,
00:09:33.775 "supported_io_types": {
00:09:33.775 "read": true,
00:09:33.775 "write": true,
00:09:33.775 "unmap": true,
00:09:33.775 "flush": true,
00:09:33.775 "reset": true,
00:09:33.775 "nvme_admin": false,
00:09:33.775 "nvme_io": false,
00:09:33.775 "nvme_io_md": false,
00:09:33.775 "write_zeroes": true,
00:09:33.775 "zcopy": true,
00:09:33.775 "get_zone_info": false,
00:09:33.775 "zone_management": false,
00:09:33.775 "zone_append": false,
00:09:33.775 "compare": false,
00:09:33.775 "compare_and_write": false,
00:09:33.775 "abort": true,
00:09:33.775 "seek_hole": false,
00:09:33.775 "seek_data": false,
00:09:33.775 "copy": true,
00:09:33.775 "nvme_iov_md": false
00:09:33.775 },
00:09:33.775 "memory_domains": [
00:09:33.775 {
00:09:33.775 "dma_device_id": "system",
00:09:33.775 "dma_device_type": 1
00:09:33.775 },
00:09:33.775 {
00:09:33.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:33.775 "dma_device_type": 2
00:09:33.775 }
00:09:33.775 ],
00:09:33.775 "driver_specific": {}
00:09:33.775 }
00:09:33.775 ]
00:09:33.775 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:33.775 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:09:33.775 14:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:09:33.775 14:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:33.775 14:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:33.775 14:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:33.775 14:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:33.775 14:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:33.775 14:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:33.775 14:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:33.775 14:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:33.775 14:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:33.775 14:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:33.775 14:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:33.775 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:33.775 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:33.775 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:33.775 14:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:33.775 "name": "Existed_Raid",
00:09:33.775 "uuid": "0ee8fb0b-895a-43bf-9fd1-34ac3ffb0ffd",
00:09:33.775 "strip_size_kb": 64,
00:09:33.775 "state": "configuring",
00:09:33.775 "raid_level": "raid0",
00:09:33.775 "superblock": true,
00:09:33.775 "num_base_bdevs": 2,
00:09:33.775 "num_base_bdevs_discovered": 1,
00:09:33.775 "num_base_bdevs_operational": 2,
00:09:33.775 "base_bdevs_list": [
00:09:33.775 {
00:09:33.775 "name": "BaseBdev1",
00:09:33.775 "uuid": "645bd4f0-8c28-4f6b-901b-d17c969746ae",
00:09:33.775 "is_configured": true,
00:09:33.775 "data_offset": 2048,
00:09:33.775 "data_size": 63488
00:09:33.776 },
00:09:33.776 {
00:09:33.776 "name": "BaseBdev2",
00:09:33.776 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:33.776 "is_configured": false,
00:09:33.776 "data_offset": 0,
00:09:33.776 "data_size": 0
00:09:33.776 }
00:09:33.776 ]
00:09:33.776 }'
00:09:33.776 14:35:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:33.776 14:35:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:34.343 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:34.343 14:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:34.343 14:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:34.343 [2024-11-04 14:35:33.229408] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:34.343 [2024-11-04 14:35:33.229466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:09:34.343 14:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:34.343 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:09:34.343 14:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:34.343 14:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:34.343 [2024-11-04 14:35:33.241459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:34.343 [2024-11-04 14:35:33.244133] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:34.343 [2024-11-04 14:35:33.244317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:34.343 14:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:34.343 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:09:34.343 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:34.343 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:09:34.343 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:34.343 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:34.343 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:34.343 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:34.343 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:34.343 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:34.343 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:34.343 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:34.343 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:34.343 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:34.343 14:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:34.343 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:34.343 14:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:34.343 14:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:34.343 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:34.343 "name": "Existed_Raid",
00:09:34.343 "uuid": "f740dfee-eb95-403d-9f79-3dfa0c36ad21",
00:09:34.343 "strip_size_kb": 64,
00:09:34.343 "state": "configuring",
00:09:34.343 "raid_level": "raid0",
00:09:34.343 "superblock": true,
00:09:34.343 "num_base_bdevs": 2,
00:09:34.343 "num_base_bdevs_discovered": 1,
00:09:34.343 "num_base_bdevs_operational": 2,
00:09:34.343 "base_bdevs_list": [
00:09:34.343 {
00:09:34.343 "name": "BaseBdev1",
00:09:34.344 "uuid": "645bd4f0-8c28-4f6b-901b-d17c969746ae",
00:09:34.344 "is_configured": true,
00:09:34.344 "data_offset": 2048,
00:09:34.344 "data_size": 63488
00:09:34.344 },
00:09:34.344 {
00:09:34.344 "name": "BaseBdev2",
00:09:34.344 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:34.344 "is_configured": false,
00:09:34.344 "data_offset": 0,
00:09:34.344 "data_size": 0
00:09:34.344 }
00:09:34.344 ]
00:09:34.344 }'
00:09:34.344 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:34.344 14:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:34.910 [2024-11-04 14:35:33.813402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:34.910 [2024-11-04 14:35:33.813876] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:09:34.910 BaseBdev2 [2024-11-04 14:35:33.814053] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:09:34.910 [2024-11-04 14:35:33.814401] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:09:34.910 [2024-11-04 14:35:33.814600] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:09:34.910 [2024-11-04 14:35:33.814621] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:09:34.910 [2024-11-04 14:35:33.814792] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:34.910 [
00:09:34.910 {
00:09:34.910 "name": "BaseBdev2",
00:09:34.910 "aliases": [
00:09:34.910 "ea91e30e-1e4a-495b-bf85-0b3e0ead784f"
00:09:34.910 ],
00:09:34.910 "product_name": "Malloc disk",
00:09:34.910 "block_size": 512,
00:09:34.910 "num_blocks": 65536,
00:09:34.910 "uuid": "ea91e30e-1e4a-495b-bf85-0b3e0ead784f",
00:09:34.910 "assigned_rate_limits": {
00:09:34.910 "rw_ios_per_sec": 0,
00:09:34.910 "rw_mbytes_per_sec": 0,
00:09:34.910 "r_mbytes_per_sec": 0,
00:09:34.910 "w_mbytes_per_sec": 0
00:09:34.910 },
00:09:34.910 "claimed": true,
00:09:34.910 "claim_type": "exclusive_write",
00:09:34.910 "zoned": false,
00:09:34.910 "supported_io_types": {
00:09:34.910 "read": true,
00:09:34.910 "write": true,
00:09:34.910 "unmap": true,
00:09:34.910 "flush": true,
00:09:34.910 "reset": true,
00:09:34.910 "nvme_admin": false,
00:09:34.910 "nvme_io": false,
00:09:34.910 "nvme_io_md": false,
00:09:34.910 "write_zeroes": true,
00:09:34.910 "zcopy": true,
00:09:34.910 "get_zone_info": false,
00:09:34.910 "zone_management": false,
00:09:34.910 "zone_append": false,
00:09:34.910 "compare": false,
00:09:34.910 "compare_and_write": false,
00:09:34.910 "abort": true,
00:09:34.910 "seek_hole": false,
00:09:34.910 "seek_data": false,
00:09:34.910 "copy": true,
00:09:34.910 "nvme_iov_md": false
00:09:34.910 },
00:09:34.910 "memory_domains": [
00:09:34.910 {
00:09:34.910 "dma_device_id": "system",
00:09:34.910 "dma_device_type": 1
00:09:34.910 },
00:09:34.910 {
00:09:34.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:34.910 "dma_device_type": 2
00:09:34.910 }
00:09:34.910 ],
00:09:34.910 "driver_specific": {}
00:09:34.910 }
00:09:34.910 ]
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:34.910 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:34.910 "name": "Existed_Raid",
00:09:34.910 "uuid": "f740dfee-eb95-403d-9f79-3dfa0c36ad21",
00:09:34.910 "strip_size_kb": 64,
00:09:34.910 "state": "online",
00:09:34.910 "raid_level": "raid0",
00:09:34.910 "superblock": true,
00:09:34.910 "num_base_bdevs": 2,
00:09:34.910 "num_base_bdevs_discovered": 2,
00:09:34.910 "num_base_bdevs_operational": 2,
00:09:34.910 "base_bdevs_list": [
00:09:34.910 {
00:09:34.910 "name": "BaseBdev1",
00:09:34.910 "uuid": "645bd4f0-8c28-4f6b-901b-d17c969746ae",
00:09:34.910 "is_configured": true,
00:09:34.910 "data_offset": 2048,
00:09:34.910 "data_size": 63488
00:09:34.910 },
00:09:34.911 {
00:09:34.911 "name": "BaseBdev2",
00:09:34.911 "uuid": "ea91e30e-1e4a-495b-bf85-0b3e0ead784f",
00:09:34.911 "is_configured": true,
00:09:34.911 "data_offset": 2048,
00:09:34.911 "data_size": 63488
00:09:34.911 }
00:09:34.911 ]
00:09:34.911 }'
00:09:34.911 14:35:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:34.911 14:35:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:35.478 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:09:35.478 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:09:35.478 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:35.478 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:35.478 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:09:35.478 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:35.478 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:09:35.478 14:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:35.478 14:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:35.478 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:35.478 [2024-11-04 14:35:34.389970] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:35.478 14:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:35.478 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:35.478 "name": "Existed_Raid",
00:09:35.478 "aliases": [
00:09:35.478 "f740dfee-eb95-403d-9f79-3dfa0c36ad21"
00:09:35.478 ],
00:09:35.478 "product_name": "Raid Volume",
00:09:35.478 "block_size": 512,
00:09:35.478 "num_blocks": 126976,
00:09:35.478 "uuid": "f740dfee-eb95-403d-9f79-3dfa0c36ad21",
00:09:35.478 "assigned_rate_limits": {
00:09:35.478 "rw_ios_per_sec": 0,
00:09:35.478 "rw_mbytes_per_sec": 0,
00:09:35.478 "r_mbytes_per_sec": 0,
00:09:35.478 "w_mbytes_per_sec": 0
00:09:35.478 },
00:09:35.478 "claimed": false,
00:09:35.478 "zoned": false,
00:09:35.478 "supported_io_types": {
00:09:35.478 "read": true,
00:09:35.478 "write": true,
00:09:35.478 "unmap": true,
00:09:35.478 "flush": true,
00:09:35.478 "reset": true,
00:09:35.478 "nvme_admin": false,
00:09:35.478 "nvme_io": false,
00:09:35.478 "nvme_io_md": false,
00:09:35.478 "write_zeroes": true,
00:09:35.478 "zcopy": false,
00:09:35.478 "get_zone_info": false,
00:09:35.478 "zone_management": false,
00:09:35.478 "zone_append": false,
00:09:35.478 "compare": false,
00:09:35.478 "compare_and_write": false,
00:09:35.478 "abort": false,
00:09:35.478 "seek_hole": false,
00:09:35.478 "seek_data": false, 00:09:35.478 "copy": false, 00:09:35.478 "nvme_iov_md": false 00:09:35.478 }, 00:09:35.478 "memory_domains": [ 00:09:35.478 { 00:09:35.478 "dma_device_id": "system", 00:09:35.478 "dma_device_type": 1 00:09:35.478 }, 00:09:35.478 { 00:09:35.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.478 "dma_device_type": 2 00:09:35.478 }, 00:09:35.478 { 00:09:35.478 "dma_device_id": "system", 00:09:35.478 "dma_device_type": 1 00:09:35.478 }, 00:09:35.478 { 00:09:35.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.478 "dma_device_type": 2 00:09:35.478 } 00:09:35.478 ], 00:09:35.478 "driver_specific": { 00:09:35.478 "raid": { 00:09:35.478 "uuid": "f740dfee-eb95-403d-9f79-3dfa0c36ad21", 00:09:35.478 "strip_size_kb": 64, 00:09:35.478 "state": "online", 00:09:35.478 "raid_level": "raid0", 00:09:35.478 "superblock": true, 00:09:35.478 "num_base_bdevs": 2, 00:09:35.478 "num_base_bdevs_discovered": 2, 00:09:35.478 "num_base_bdevs_operational": 2, 00:09:35.478 "base_bdevs_list": [ 00:09:35.478 { 00:09:35.478 "name": "BaseBdev1", 00:09:35.478 "uuid": "645bd4f0-8c28-4f6b-901b-d17c969746ae", 00:09:35.478 "is_configured": true, 00:09:35.478 "data_offset": 2048, 00:09:35.478 "data_size": 63488 00:09:35.478 }, 00:09:35.478 { 00:09:35.478 "name": "BaseBdev2", 00:09:35.478 "uuid": "ea91e30e-1e4a-495b-bf85-0b3e0ead784f", 00:09:35.478 "is_configured": true, 00:09:35.478 "data_offset": 2048, 00:09:35.478 "data_size": 63488 00:09:35.478 } 00:09:35.478 ] 00:09:35.478 } 00:09:35.478 } 00:09:35.478 }' 00:09:35.478 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:35.478 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:35.478 BaseBdev2' 00:09:35.478 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:09:35.478 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:35.478 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.478 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.478 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:35.478 14:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.478 14:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.478 14:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.478 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.478 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.478 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.478 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:35.478 14:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.478 14:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.478 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.736 14:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.736 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.736 14:35:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.736 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:35.736 14:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.736 14:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.736 [2024-11-04 14:35:34.637808] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:35.737 [2024-11-04 14:35:34.637848] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:35.737 [2024-11-04 14:35:34.637912] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.737 14:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.737 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:35.737 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:35.737 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:35.737 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:35.737 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:35.737 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:09:35.737 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.737 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:35.737 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:35.737 14:35:34 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.737 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:35.737 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.737 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.737 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.737 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.737 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.737 14:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.737 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.737 14:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.737 14:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.737 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.737 "name": "Existed_Raid", 00:09:35.737 "uuid": "f740dfee-eb95-403d-9f79-3dfa0c36ad21", 00:09:35.737 "strip_size_kb": 64, 00:09:35.737 "state": "offline", 00:09:35.737 "raid_level": "raid0", 00:09:35.737 "superblock": true, 00:09:35.737 "num_base_bdevs": 2, 00:09:35.737 "num_base_bdevs_discovered": 1, 00:09:35.737 "num_base_bdevs_operational": 1, 00:09:35.737 "base_bdevs_list": [ 00:09:35.737 { 00:09:35.737 "name": null, 00:09:35.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.737 "is_configured": false, 00:09:35.737 "data_offset": 0, 00:09:35.737 "data_size": 63488 00:09:35.737 }, 00:09:35.737 { 00:09:35.737 "name": "BaseBdev2", 00:09:35.737 "uuid": 
"ea91e30e-1e4a-495b-bf85-0b3e0ead784f", 00:09:35.737 "is_configured": true, 00:09:35.737 "data_offset": 2048, 00:09:35.737 "data_size": 63488 00:09:35.737 } 00:09:35.737 ] 00:09:35.737 }' 00:09:35.737 14:35:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.737 14:35:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.304 14:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:36.304 14:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.304 14:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.304 14:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:36.304 14:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.304 14:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.304 14:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.304 14:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:36.304 14:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:36.304 14:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:36.304 14:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.304 14:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.304 [2024-11-04 14:35:35.289443] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:36.304 [2024-11-04 14:35:35.289508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
Existed_Raid, state offline 00:09:36.304 14:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.304 14:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:36.304 14:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.304 14:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.304 14:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.304 14:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.304 14:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:36.304 14:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.563 14:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:36.563 14:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:36.563 14:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:36.563 14:35:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60900 00:09:36.563 14:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 60900 ']' 00:09:36.563 14:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 60900 00:09:36.563 14:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:09:36.563 14:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:36.563 14:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60900 00:09:36.563 killing process with pid 60900 00:09:36.563 14:35:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:36.563 14:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:36.563 14:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60900' 00:09:36.563 14:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 60900 00:09:36.563 [2024-11-04 14:35:35.464118] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:36.563 14:35:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 60900 00:09:36.563 [2024-11-04 14:35:35.478993] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:37.499 14:35:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:37.499 00:09:37.499 real 0m5.495s 00:09:37.499 user 0m8.333s 00:09:37.499 sys 0m0.746s 00:09:37.499 14:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:37.499 ************************************ 00:09:37.499 END TEST raid_state_function_test_sb 00:09:37.499 ************************************ 00:09:37.499 14:35:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.499 14:35:36 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:09:37.499 14:35:36 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:37.499 14:35:36 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:37.499 14:35:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:37.499 ************************************ 00:09:37.499 START TEST raid_superblock_test 00:09:37.499 ************************************ 00:09:37.499 14:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 2 00:09:37.499 
14:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:37.499 14:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:37.499 14:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:37.499 14:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:37.499 14:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:37.499 14:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:37.499 14:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:37.499 14:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:37.499 14:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:37.499 14:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:37.499 14:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:37.499 14:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:37.499 14:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:37.499 14:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:37.499 14:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:37.499 14:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:37.499 14:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61157 00:09:37.499 14:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:37.499 14:35:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@413 -- # waitforlisten 61157 00:09:37.499 14:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 61157 ']' 00:09:37.499 14:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.499 14:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:37.499 14:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.499 14:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:37.499 14:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.758 [2024-11-04 14:35:36.671736] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:09:37.758 [2024-11-04 14:35:36.672175] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61157 ] 00:09:37.758 [2024-11-04 14:35:36.860955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.016 [2024-11-04 14:35:37.017370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.283 [2024-11-04 14:35:37.249980] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.283 [2024-11-04 14:35:37.250286] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.850 14:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:38.850 14:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:09:38.850 14:35:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:38.850 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:38.850 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:38.850 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:38.850 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:38.850 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:38.850 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:38.850 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:38.850 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:38.850 14:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.850 14:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.850 malloc1 00:09:38.850 14:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.850 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:38.850 14:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.850 14:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.850 [2024-11-04 14:35:37.768204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:38.850 [2024-11-04 14:35:37.768292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.850 [2024-11-04 14:35:37.768341] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:38.850 [2024-11-04 14:35:37.768365] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.850 [2024-11-04 14:35:37.771327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.850 [2024-11-04 14:35:37.771377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:38.850 pt1 00:09:38.850 14:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.850 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:38.850 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:38.850 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:38.850 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:38.850 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.851 malloc2 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd 
bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.851 [2024-11-04 14:35:37.820035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:38.851 [2024-11-04 14:35:37.820244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.851 [2024-11-04 14:35:37.820301] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:38.851 [2024-11-04 14:35:37.820326] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.851 [2024-11-04 14:35:37.823232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.851 [2024-11-04 14:35:37.823403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:38.851 pt2 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.851 [2024-11-04 14:35:37.828129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:38.851 [2024-11-04 14:35:37.830548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:38.851 [2024-11-04 14:35:37.830886] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:38.851 [2024-11-04 14:35:37.830911] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:38.851 [2024-11-04 14:35:37.831262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:38.851 [2024-11-04 14:35:37.831458] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:38.851 [2024-11-04 14:35:37.831481] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:38.851 [2024-11-04 14:35:37.831657] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.851 "name": "raid_bdev1", 00:09:38.851 "uuid": "8ba75a2c-53bd-4691-b239-7985ea31b1f2", 00:09:38.851 "strip_size_kb": 64, 00:09:38.851 "state": "online", 00:09:38.851 "raid_level": "raid0", 00:09:38.851 "superblock": true, 00:09:38.851 "num_base_bdevs": 2, 00:09:38.851 "num_base_bdevs_discovered": 2, 00:09:38.851 "num_base_bdevs_operational": 2, 00:09:38.851 "base_bdevs_list": [ 00:09:38.851 { 00:09:38.851 "name": "pt1", 00:09:38.851 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:38.851 "is_configured": true, 00:09:38.851 "data_offset": 2048, 00:09:38.851 "data_size": 63488 00:09:38.851 }, 00:09:38.851 { 00:09:38.851 "name": "pt2", 00:09:38.851 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:38.851 "is_configured": true, 00:09:38.851 "data_offset": 2048, 00:09:38.851 "data_size": 63488 00:09:38.851 } 00:09:38.851 ] 00:09:38.851 }' 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.851 14:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.418 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:39.418 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:39.418 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:39.418 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:39.418 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:39.418 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:39.418 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:39.418 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:39.418 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.418 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.418 [2024-11-04 14:35:38.352670] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:39.418 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.418 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:39.418 "name": "raid_bdev1", 00:09:39.418 "aliases": [ 00:09:39.418 "8ba75a2c-53bd-4691-b239-7985ea31b1f2" 00:09:39.418 ], 00:09:39.418 "product_name": "Raid Volume", 00:09:39.418 "block_size": 512, 00:09:39.418 "num_blocks": 126976, 00:09:39.418 "uuid": "8ba75a2c-53bd-4691-b239-7985ea31b1f2", 00:09:39.418 "assigned_rate_limits": { 00:09:39.418 "rw_ios_per_sec": 0, 00:09:39.418 "rw_mbytes_per_sec": 0, 00:09:39.418 "r_mbytes_per_sec": 0, 00:09:39.418 "w_mbytes_per_sec": 0 00:09:39.418 }, 00:09:39.418 "claimed": false, 00:09:39.418 "zoned": false, 00:09:39.418 "supported_io_types": { 00:09:39.418 "read": true, 00:09:39.418 "write": true, 00:09:39.418 "unmap": true, 00:09:39.418 "flush": true, 00:09:39.418 "reset": true, 00:09:39.418 "nvme_admin": false, 00:09:39.418 "nvme_io": false, 00:09:39.418 "nvme_io_md": false, 00:09:39.418 "write_zeroes": true, 00:09:39.418 "zcopy": false, 00:09:39.418 "get_zone_info": false, 
00:09:39.418 "zone_management": false, 00:09:39.418 "zone_append": false, 00:09:39.418 "compare": false, 00:09:39.418 "compare_and_write": false, 00:09:39.418 "abort": false, 00:09:39.418 "seek_hole": false, 00:09:39.418 "seek_data": false, 00:09:39.418 "copy": false, 00:09:39.418 "nvme_iov_md": false 00:09:39.418 }, 00:09:39.418 "memory_domains": [ 00:09:39.418 { 00:09:39.418 "dma_device_id": "system", 00:09:39.419 "dma_device_type": 1 00:09:39.419 }, 00:09:39.419 { 00:09:39.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.419 "dma_device_type": 2 00:09:39.419 }, 00:09:39.419 { 00:09:39.419 "dma_device_id": "system", 00:09:39.419 "dma_device_type": 1 00:09:39.419 }, 00:09:39.419 { 00:09:39.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.419 "dma_device_type": 2 00:09:39.419 } 00:09:39.419 ], 00:09:39.419 "driver_specific": { 00:09:39.419 "raid": { 00:09:39.419 "uuid": "8ba75a2c-53bd-4691-b239-7985ea31b1f2", 00:09:39.419 "strip_size_kb": 64, 00:09:39.419 "state": "online", 00:09:39.419 "raid_level": "raid0", 00:09:39.419 "superblock": true, 00:09:39.419 "num_base_bdevs": 2, 00:09:39.419 "num_base_bdevs_discovered": 2, 00:09:39.419 "num_base_bdevs_operational": 2, 00:09:39.419 "base_bdevs_list": [ 00:09:39.419 { 00:09:39.419 "name": "pt1", 00:09:39.419 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:39.419 "is_configured": true, 00:09:39.419 "data_offset": 2048, 00:09:39.419 "data_size": 63488 00:09:39.419 }, 00:09:39.419 { 00:09:39.419 "name": "pt2", 00:09:39.419 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:39.419 "is_configured": true, 00:09:39.419 "data_offset": 2048, 00:09:39.419 "data_size": 63488 00:09:39.419 } 00:09:39.419 ] 00:09:39.419 } 00:09:39.419 } 00:09:39.419 }' 00:09:39.419 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:39.419 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='pt1 00:09:39.419 pt2' 00:09:39.419 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.419 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:39.419 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.419 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:39.419 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.419 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.419 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.419 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.677 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.677 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.677 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.677 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:39.677 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.677 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.677 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.677 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.677 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:09:39.677 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.677 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:39.677 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:39.677 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.677 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.677 [2024-11-04 14:35:38.612664] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:39.677 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.677 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8ba75a2c-53bd-4691-b239-7985ea31b1f2 00:09:39.677 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8ba75a2c-53bd-4691-b239-7985ea31b1f2 ']' 00:09:39.677 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:39.677 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.677 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.677 [2024-11-04 14:35:38.684317] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:39.677 [2024-11-04 14:35:38.684347] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:39.677 [2024-11-04 14:35:38.684443] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:39.677 [2024-11-04 14:35:38.684565] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:39.677 [2024-11-04 14:35:38.684583] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name 
raid_bdev1, state offline 00:09:39.678 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.678 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.678 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.678 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.678 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:39.678 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.678 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:39.678 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:39.678 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:39.678 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:39.678 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.678 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.678 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.678 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:39.678 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:39.678 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.678 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.678 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.678 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq 
-r '[.[] | select(.product_name == "passthru")] | any' 00:09:39.678 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:39.678 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.678 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.678 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.936 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:39.936 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:39.936 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:39.936 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:39.936 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:39.936 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:39.936 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:39.936 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:39.936 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:39.936 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.936 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.936 [2024-11-04 14:35:38.832391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 
00:09:39.936 [2024-11-04 14:35:38.835000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:39.936 [2024-11-04 14:35:38.835089] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:39.936 [2024-11-04 14:35:38.835162] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:39.936 [2024-11-04 14:35:38.835189] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:39.936 [2024-11-04 14:35:38.835207] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:39.936 request: 00:09:39.936 { 00:09:39.936 "name": "raid_bdev1", 00:09:39.936 "raid_level": "raid0", 00:09:39.936 "base_bdevs": [ 00:09:39.936 "malloc1", 00:09:39.936 "malloc2" 00:09:39.936 ], 00:09:39.936 "strip_size_kb": 64, 00:09:39.936 "superblock": false, 00:09:39.936 "method": "bdev_raid_create", 00:09:39.936 "req_id": 1 00:09:39.936 } 00:09:39.936 Got JSON-RPC error response 00:09:39.936 response: 00:09:39.936 { 00:09:39.936 "code": -17, 00:09:39.936 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:39.936 } 00:09:39.936 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:39.936 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:39.936 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:39.936 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:39.936 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:39.936 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.936 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r 
'.[]' 00:09:39.936 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.937 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.937 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.937 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:39.937 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:39.937 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:39.937 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.937 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.937 [2024-11-04 14:35:38.900426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:39.937 [2024-11-04 14:35:38.900676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.937 [2024-11-04 14:35:38.900751] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:39.937 [2024-11-04 14:35:38.900978] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.937 [2024-11-04 14:35:38.904104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.937 [2024-11-04 14:35:38.904268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:39.937 [2024-11-04 14:35:38.904484] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:39.937 [2024-11-04 14:35:38.904668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:39.937 pt1 00:09:39.937 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.937 14:35:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:09:39.937 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:39.937 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.937 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:39.937 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.937 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:39.937 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.937 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.937 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.937 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.937 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.937 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:39.937 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.937 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.937 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.937 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.937 "name": "raid_bdev1", 00:09:39.937 "uuid": "8ba75a2c-53bd-4691-b239-7985ea31b1f2", 00:09:39.937 "strip_size_kb": 64, 00:09:39.937 "state": "configuring", 00:09:39.937 "raid_level": "raid0", 00:09:39.937 "superblock": true, 00:09:39.937 
"num_base_bdevs": 2, 00:09:39.937 "num_base_bdevs_discovered": 1, 00:09:39.937 "num_base_bdevs_operational": 2, 00:09:39.937 "base_bdevs_list": [ 00:09:39.937 { 00:09:39.937 "name": "pt1", 00:09:39.937 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:39.937 "is_configured": true, 00:09:39.937 "data_offset": 2048, 00:09:39.937 "data_size": 63488 00:09:39.937 }, 00:09:39.937 { 00:09:39.937 "name": null, 00:09:39.937 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:39.937 "is_configured": false, 00:09:39.937 "data_offset": 2048, 00:09:39.937 "data_size": 63488 00:09:39.937 } 00:09:39.937 ] 00:09:39.937 }' 00:09:39.937 14:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.937 14:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.541 14:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:40.541 14:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:40.541 14:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:40.541 14:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:40.541 14:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.541 14:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.541 [2024-11-04 14:35:39.436761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:40.541 [2024-11-04 14:35:39.436850] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.541 [2024-11-04 14:35:39.436882] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:40.541 [2024-11-04 14:35:39.436900] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.541 [2024-11-04 
14:35:39.437498] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.541 [2024-11-04 14:35:39.437536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:40.541 [2024-11-04 14:35:39.437634] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:40.541 [2024-11-04 14:35:39.437671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:40.541 [2024-11-04 14:35:39.437812] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:40.541 [2024-11-04 14:35:39.437833] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:40.541 [2024-11-04 14:35:39.438160] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:40.541 [2024-11-04 14:35:39.438496] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:40.541 [2024-11-04 14:35:39.438519] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:40.541 [2024-11-04 14:35:39.438691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.541 pt2 00:09:40.541 14:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.541 14:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:40.541 14:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:40.541 14:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:40.541 14:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:40.541 14:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:40.541 14:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid0 00:09:40.541 14:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.541 14:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:40.541 14:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.541 14:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.541 14:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.541 14:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.541 14:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.541 14:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.541 14:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.541 14:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:40.541 14:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.541 14:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.541 "name": "raid_bdev1", 00:09:40.541 "uuid": "8ba75a2c-53bd-4691-b239-7985ea31b1f2", 00:09:40.541 "strip_size_kb": 64, 00:09:40.541 "state": "online", 00:09:40.541 "raid_level": "raid0", 00:09:40.541 "superblock": true, 00:09:40.541 "num_base_bdevs": 2, 00:09:40.541 "num_base_bdevs_discovered": 2, 00:09:40.541 "num_base_bdevs_operational": 2, 00:09:40.541 "base_bdevs_list": [ 00:09:40.541 { 00:09:40.541 "name": "pt1", 00:09:40.541 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:40.541 "is_configured": true, 00:09:40.541 "data_offset": 2048, 00:09:40.541 "data_size": 63488 00:09:40.541 }, 00:09:40.541 { 00:09:40.541 "name": "pt2", 00:09:40.541 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:09:40.541 "is_configured": true, 00:09:40.541 "data_offset": 2048, 00:09:40.541 "data_size": 63488 00:09:40.541 } 00:09:40.541 ] 00:09:40.541 }' 00:09:40.541 14:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.541 14:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.108 14:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:41.108 14:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:41.108 14:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:41.108 14:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:41.108 14:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:41.108 14:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:41.108 14:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:41.108 14:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.108 14:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.108 14:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:41.108 [2024-11-04 14:35:39.961283] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.108 14:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.108 14:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:41.108 "name": "raid_bdev1", 00:09:41.108 "aliases": [ 00:09:41.108 "8ba75a2c-53bd-4691-b239-7985ea31b1f2" 00:09:41.108 ], 00:09:41.108 "product_name": "Raid Volume", 00:09:41.108 "block_size": 512, 00:09:41.108 
"num_blocks": 126976, 00:09:41.108 "uuid": "8ba75a2c-53bd-4691-b239-7985ea31b1f2", 00:09:41.108 "assigned_rate_limits": { 00:09:41.108 "rw_ios_per_sec": 0, 00:09:41.108 "rw_mbytes_per_sec": 0, 00:09:41.108 "r_mbytes_per_sec": 0, 00:09:41.108 "w_mbytes_per_sec": 0 00:09:41.108 }, 00:09:41.108 "claimed": false, 00:09:41.108 "zoned": false, 00:09:41.108 "supported_io_types": { 00:09:41.108 "read": true, 00:09:41.108 "write": true, 00:09:41.108 "unmap": true, 00:09:41.108 "flush": true, 00:09:41.108 "reset": true, 00:09:41.108 "nvme_admin": false, 00:09:41.108 "nvme_io": false, 00:09:41.108 "nvme_io_md": false, 00:09:41.108 "write_zeroes": true, 00:09:41.108 "zcopy": false, 00:09:41.108 "get_zone_info": false, 00:09:41.108 "zone_management": false, 00:09:41.108 "zone_append": false, 00:09:41.108 "compare": false, 00:09:41.108 "compare_and_write": false, 00:09:41.108 "abort": false, 00:09:41.108 "seek_hole": false, 00:09:41.108 "seek_data": false, 00:09:41.108 "copy": false, 00:09:41.108 "nvme_iov_md": false 00:09:41.108 }, 00:09:41.108 "memory_domains": [ 00:09:41.108 { 00:09:41.108 "dma_device_id": "system", 00:09:41.108 "dma_device_type": 1 00:09:41.108 }, 00:09:41.108 { 00:09:41.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.108 "dma_device_type": 2 00:09:41.108 }, 00:09:41.108 { 00:09:41.108 "dma_device_id": "system", 00:09:41.108 "dma_device_type": 1 00:09:41.108 }, 00:09:41.108 { 00:09:41.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.108 "dma_device_type": 2 00:09:41.108 } 00:09:41.108 ], 00:09:41.108 "driver_specific": { 00:09:41.108 "raid": { 00:09:41.108 "uuid": "8ba75a2c-53bd-4691-b239-7985ea31b1f2", 00:09:41.108 "strip_size_kb": 64, 00:09:41.108 "state": "online", 00:09:41.108 "raid_level": "raid0", 00:09:41.108 "superblock": true, 00:09:41.108 "num_base_bdevs": 2, 00:09:41.108 "num_base_bdevs_discovered": 2, 00:09:41.108 "num_base_bdevs_operational": 2, 00:09:41.108 "base_bdevs_list": [ 00:09:41.108 { 00:09:41.108 "name": "pt1", 
00:09:41.108 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:41.108 "is_configured": true, 00:09:41.108 "data_offset": 2048, 00:09:41.108 "data_size": 63488 00:09:41.108 }, 00:09:41.108 { 00:09:41.108 "name": "pt2", 00:09:41.108 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:41.108 "is_configured": true, 00:09:41.108 "data_offset": 2048, 00:09:41.108 "data_size": 63488 00:09:41.108 } 00:09:41.108 ] 00:09:41.108 } 00:09:41.108 } 00:09:41.108 }' 00:09:41.108 14:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:41.108 14:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:41.108 pt2' 00:09:41.108 14:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.108 14:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:41.108 14:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.109 14:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:41.109 14:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.109 14:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.109 14:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.109 14:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.109 14:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.109 14:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.109 14:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:09:41.109 14:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:41.109 14:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.109 14:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.109 14:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.109 14:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.109 14:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.109 14:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.109 14:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:41.109 14:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:41.109 14:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.109 14:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.109 [2024-11-04 14:35:40.225375] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:41.368 14:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.368 14:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8ba75a2c-53bd-4691-b239-7985ea31b1f2 '!=' 8ba75a2c-53bd-4691-b239-7985ea31b1f2 ']' 00:09:41.368 14:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:41.368 14:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:41.368 14:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:41.368 14:35:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@563 -- # killprocess 61157 00:09:41.368 14:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 61157 ']' 00:09:41.368 14:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 61157 00:09:41.368 14:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:09:41.368 14:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:41.368 14:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61157 00:09:41.368 killing process with pid 61157 00:09:41.368 14:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:41.368 14:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:41.368 14:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61157' 00:09:41.368 14:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 61157 00:09:41.368 [2024-11-04 14:35:40.309338] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:41.368 14:35:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 61157 00:09:41.368 [2024-11-04 14:35:40.309455] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:41.368 [2024-11-04 14:35:40.309521] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:41.368 [2024-11-04 14:35:40.309555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:41.627 [2024-11-04 14:35:40.501800] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:42.562 ************************************ 00:09:42.562 END TEST raid_superblock_test 00:09:42.562 ************************************ 00:09:42.562 14:35:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:42.562 00:09:42.562 real 0m4.976s 00:09:42.562 user 0m7.388s 00:09:42.562 sys 0m0.706s 00:09:42.562 14:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:42.562 14:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.562 14:35:41 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:09:42.562 14:35:41 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:42.562 14:35:41 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:42.562 14:35:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:42.562 ************************************ 00:09:42.562 START TEST raid_read_error_test 00:09:42.562 ************************************ 00:09:42.562 14:35:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 read 00:09:42.562 14:35:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:42.562 14:35:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:42.562 14:35:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:42.562 14:35:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:42.562 14:35:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:42.562 14:35:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:42.562 14:35:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:42.562 14:35:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:42.562 14:35:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:42.562 14:35:41 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:42.562 14:35:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:42.562 14:35:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:42.562 14:35:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:42.562 14:35:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:42.562 14:35:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:42.562 14:35:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:42.562 14:35:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:42.562 14:35:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:42.562 14:35:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:42.562 14:35:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:42.562 14:35:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:42.562 14:35:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:42.562 14:35:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.iIT7xFDxry 00:09:42.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:42.562 14:35:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61369 00:09:42.562 14:35:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61369 00:09:42.562 14:35:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 61369 ']' 00:09:42.562 14:35:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:42.562 14:35:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.562 14:35:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:42.562 14:35:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.562 14:35:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:42.562 14:35:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.821 [2024-11-04 14:35:41.698264] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:09:42.821 [2024-11-04 14:35:41.698492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61369 ] 00:09:42.821 [2024-11-04 14:35:41.882465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.078 [2024-11-04 14:35:42.039117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.336 [2024-11-04 14:35:42.246973] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.336 [2024-11-04 14:35:42.247339] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.902 14:35:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:43.902 14:35:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:43.902 14:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:43.902 14:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:43.902 14:35:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.902 14:35:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.902 BaseBdev1_malloc 00:09:43.902 14:35:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.902 14:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:43.902 14:35:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.902 14:35:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.902 true 00:09:43.902 14:35:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:43.902 14:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:43.902 14:35:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.902 14:35:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.902 [2024-11-04 14:35:42.797359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:43.903 [2024-11-04 14:35:42.797452] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.903 [2024-11-04 14:35:42.797499] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:43.903 [2024-11-04 14:35:42.797532] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.903 [2024-11-04 14:35:42.800635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.903 [2024-11-04 14:35:42.800702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:43.903 BaseBdev1 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.903 BaseBdev2_malloc 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.903 true 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.903 [2024-11-04 14:35:42.852449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:43.903 [2024-11-04 14:35:42.852523] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.903 [2024-11-04 14:35:42.852564] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:43.903 [2024-11-04 14:35:42.852581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.903 [2024-11-04 14:35:42.855503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.903 [2024-11-04 14:35:42.855571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:43.903 BaseBdev2 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.903 [2024-11-04 14:35:42.860569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:09:43.903 [2024-11-04 14:35:42.863158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:43.903 [2024-11-04 14:35:42.863421] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:43.903 [2024-11-04 14:35:42.863449] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:43.903 [2024-11-04 14:35:42.863755] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:43.903 [2024-11-04 14:35:42.864049] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:43.903 [2024-11-04 14:35:42.864071] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:43.903 [2024-11-04 14:35:42.864262] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.903 "name": "raid_bdev1", 00:09:43.903 "uuid": "65a82d8c-f092-4001-98aa-28ec656bda71", 00:09:43.903 "strip_size_kb": 64, 00:09:43.903 "state": "online", 00:09:43.903 "raid_level": "raid0", 00:09:43.903 "superblock": true, 00:09:43.903 "num_base_bdevs": 2, 00:09:43.903 "num_base_bdevs_discovered": 2, 00:09:43.903 "num_base_bdevs_operational": 2, 00:09:43.903 "base_bdevs_list": [ 00:09:43.903 { 00:09:43.903 "name": "BaseBdev1", 00:09:43.903 "uuid": "e967be11-4c5d-5141-8bab-6bef476d2cf7", 00:09:43.903 "is_configured": true, 00:09:43.903 "data_offset": 2048, 00:09:43.903 "data_size": 63488 00:09:43.903 }, 00:09:43.903 { 00:09:43.903 "name": "BaseBdev2", 00:09:43.903 "uuid": "3c625084-a132-521e-9a3f-e8156beca407", 00:09:43.903 "is_configured": true, 00:09:43.903 "data_offset": 2048, 00:09:43.903 "data_size": 63488 00:09:43.903 } 00:09:43.903 ] 00:09:43.903 }' 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.903 14:35:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.478 14:35:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:44.478 14:35:43 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:44.478 [2024-11-04 14:35:43.530416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:45.413 14:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:45.413 14:35:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.413 14:35:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.413 14:35:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.413 14:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:45.413 14:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:45.413 14:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:45.413 14:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:45.413 14:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.413 14:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.413 14:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:45.413 14:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.413 14:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:45.413 14:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.413 14:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.413 14:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:45.413 14:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.413 14:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.413 14:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.413 14:35:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.414 14:35:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.414 14:35:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.414 14:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.414 "name": "raid_bdev1", 00:09:45.414 "uuid": "65a82d8c-f092-4001-98aa-28ec656bda71", 00:09:45.414 "strip_size_kb": 64, 00:09:45.414 "state": "online", 00:09:45.414 "raid_level": "raid0", 00:09:45.414 "superblock": true, 00:09:45.414 "num_base_bdevs": 2, 00:09:45.414 "num_base_bdevs_discovered": 2, 00:09:45.414 "num_base_bdevs_operational": 2, 00:09:45.414 "base_bdevs_list": [ 00:09:45.414 { 00:09:45.414 "name": "BaseBdev1", 00:09:45.414 "uuid": "e967be11-4c5d-5141-8bab-6bef476d2cf7", 00:09:45.414 "is_configured": true, 00:09:45.414 "data_offset": 2048, 00:09:45.414 "data_size": 63488 00:09:45.414 }, 00:09:45.414 { 00:09:45.414 "name": "BaseBdev2", 00:09:45.414 "uuid": "3c625084-a132-521e-9a3f-e8156beca407", 00:09:45.414 "is_configured": true, 00:09:45.414 "data_offset": 2048, 00:09:45.414 "data_size": 63488 00:09:45.414 } 00:09:45.414 ] 00:09:45.414 }' 00:09:45.414 14:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.414 14:35:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.980 14:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:45.980 14:35:44 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.980 14:35:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.980 [2024-11-04 14:35:44.932222] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:45.980 [2024-11-04 14:35:44.932267] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:45.980 [2024-11-04 14:35:44.935980] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:45.980 [2024-11-04 14:35:44.936197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.980 [2024-11-04 14:35:44.936290] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:45.980 [2024-11-04 14:35:44.936548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:45.980 14:35:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.980 { 00:09:45.980 "results": [ 00:09:45.980 { 00:09:45.980 "job": "raid_bdev1", 00:09:45.980 "core_mask": "0x1", 00:09:45.980 "workload": "randrw", 00:09:45.980 "percentage": 50, 00:09:45.980 "status": "finished", 00:09:45.980 "queue_depth": 1, 00:09:45.980 "io_size": 131072, 00:09:45.980 "runtime": 1.398882, 00:09:45.980 "iops": 10441.910039588758, 00:09:45.980 "mibps": 1305.2387549485948, 00:09:45.980 "io_failed": 1, 00:09:45.980 "io_timeout": 0, 00:09:45.980 "avg_latency_us": 133.77646519964154, 00:09:45.980 "min_latency_us": 36.53818181818182, 00:09:45.980 "max_latency_us": 1846.9236363636364 00:09:45.980 } 00:09:45.980 ], 00:09:45.980 "core_count": 1 00:09:45.980 } 00:09:45.980 14:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61369 00:09:45.980 14:35:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 61369 ']' 00:09:45.980 14:35:44 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 61369 00:09:45.980 14:35:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:09:45.980 14:35:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:45.980 14:35:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61369 00:09:45.980 killing process with pid 61369 00:09:45.980 14:35:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:45.980 14:35:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:45.980 14:35:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61369' 00:09:45.980 14:35:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 61369 00:09:45.980 [2024-11-04 14:35:44.971175] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:45.981 14:35:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 61369 00:09:45.981 [2024-11-04 14:35:45.095240] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:47.398 14:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.iIT7xFDxry 00:09:47.398 14:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:47.398 14:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:47.398 14:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:47.398 14:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:47.398 14:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:47.398 14:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:47.398 14:35:46 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:47.398 00:09:47.398 real 0m4.540s 00:09:47.398 user 0m5.749s 00:09:47.398 sys 0m0.584s 00:09:47.398 14:35:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:47.398 ************************************ 00:09:47.398 END TEST raid_read_error_test 00:09:47.398 ************************************ 00:09:47.398 14:35:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.398 14:35:46 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:09:47.398 14:35:46 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:47.398 14:35:46 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:47.398 14:35:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:47.398 ************************************ 00:09:47.398 START TEST raid_write_error_test 00:09:47.398 ************************************ 00:09:47.398 14:35:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 write 00:09:47.398 14:35:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:47.398 14:35:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:47.398 14:35:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:47.398 14:35:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:47.398 14:35:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:47.398 14:35:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:47.398 14:35:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:47.398 14:35:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:47.398 14:35:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:47.398 14:35:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:47.398 14:35:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:47.398 14:35:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:47.398 14:35:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:47.398 14:35:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:47.398 14:35:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:47.398 14:35:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:47.398 14:35:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:47.398 14:35:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:47.398 14:35:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:47.398 14:35:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:47.398 14:35:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:47.398 14:35:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:47.398 14:35:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.yZ1bAgQvZn 00:09:47.398 14:35:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61520 00:09:47.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:47.398 14:35:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61520 00:09:47.398 14:35:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 61520 ']' 00:09:47.398 14:35:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.398 14:35:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:47.398 14:35:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:47.398 14:35:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.398 14:35:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:47.398 14:35:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.398 [2024-11-04 14:35:46.291463] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:09:47.398 [2024-11-04 14:35:46.291620] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61520 ] 00:09:47.398 [2024-11-04 14:35:46.465612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.657 [2024-11-04 14:35:46.598509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.916 [2024-11-04 14:35:46.815343] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.916 [2024-11-04 14:35:46.815400] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.522 BaseBdev1_malloc 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.522 true 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.522 [2024-11-04 14:35:47.380651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:48.522 [2024-11-04 14:35:47.380883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.522 [2024-11-04 14:35:47.380937] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:48.522 [2024-11-04 14:35:47.380960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.522 [2024-11-04 14:35:47.383799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.522 [2024-11-04 14:35:47.383850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:48.522 BaseBdev1 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.522 BaseBdev2_malloc 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:48.522 14:35:47 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.522 true 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.522 [2024-11-04 14:35:47.450297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:48.522 [2024-11-04 14:35:47.450591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.522 [2024-11-04 14:35:47.450627] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:48.522 [2024-11-04 14:35:47.450646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.522 [2024-11-04 14:35:47.453629] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.522 BaseBdev2 00:09:48.522 [2024-11-04 14:35:47.453862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.522 [2024-11-04 14:35:47.458550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:48.522 [2024-11-04 14:35:47.461279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:48.522 [2024-11-04 14:35:47.461663] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:48.522 [2024-11-04 14:35:47.461821] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:48.522 [2024-11-04 14:35:47.462215] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:48.522 [2024-11-04 14:35:47.462589] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:48.522 [2024-11-04 14:35:47.462731] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:48.522 [2024-11-04 14:35:47.463121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:48.522 14:35:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.523 14:35:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:48.523 14:35:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.523 14:35:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.523 14:35:47 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.523 14:35:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.523 14:35:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.523 14:35:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.523 14:35:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.523 14:35:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.523 14:35:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.523 14:35:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.523 "name": "raid_bdev1", 00:09:48.523 "uuid": "8db54e0d-b3dc-493d-803c-6d1a3a370747", 00:09:48.523 "strip_size_kb": 64, 00:09:48.523 "state": "online", 00:09:48.523 "raid_level": "raid0", 00:09:48.523 "superblock": true, 00:09:48.523 "num_base_bdevs": 2, 00:09:48.523 "num_base_bdevs_discovered": 2, 00:09:48.523 "num_base_bdevs_operational": 2, 00:09:48.523 "base_bdevs_list": [ 00:09:48.523 { 00:09:48.523 "name": "BaseBdev1", 00:09:48.523 "uuid": "340714b3-59ea-5587-b65a-3285fe5772ee", 00:09:48.523 "is_configured": true, 00:09:48.523 "data_offset": 2048, 00:09:48.523 "data_size": 63488 00:09:48.523 }, 00:09:48.523 { 00:09:48.523 "name": "BaseBdev2", 00:09:48.523 "uuid": "29ac7366-1581-578f-8a2d-173a2314771d", 00:09:48.523 "is_configured": true, 00:09:48.523 "data_offset": 2048, 00:09:48.523 "data_size": 63488 00:09:48.523 } 00:09:48.523 ] 00:09:48.523 }' 00:09:48.523 14:35:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.523 14:35:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.090 14:35:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:49.090 14:35:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:49.090 [2024-11-04 14:35:48.100709] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:50.025 14:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:50.025 14:35:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.025 14:35:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.025 14:35:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.025 14:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:50.025 14:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:50.025 14:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:50.025 14:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:50.025 14:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.025 14:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.025 14:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:50.025 14:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.025 14:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:50.025 14:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.025 14:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:09:50.025 14:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.025 14:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.025 14:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.025 14:35:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.025 14:35:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.025 14:35:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.025 14:35:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.025 14:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.025 "name": "raid_bdev1", 00:09:50.025 "uuid": "8db54e0d-b3dc-493d-803c-6d1a3a370747", 00:09:50.025 "strip_size_kb": 64, 00:09:50.025 "state": "online", 00:09:50.025 "raid_level": "raid0", 00:09:50.025 "superblock": true, 00:09:50.025 "num_base_bdevs": 2, 00:09:50.025 "num_base_bdevs_discovered": 2, 00:09:50.025 "num_base_bdevs_operational": 2, 00:09:50.025 "base_bdevs_list": [ 00:09:50.025 { 00:09:50.025 "name": "BaseBdev1", 00:09:50.025 "uuid": "340714b3-59ea-5587-b65a-3285fe5772ee", 00:09:50.025 "is_configured": true, 00:09:50.025 "data_offset": 2048, 00:09:50.025 "data_size": 63488 00:09:50.025 }, 00:09:50.025 { 00:09:50.025 "name": "BaseBdev2", 00:09:50.025 "uuid": "29ac7366-1581-578f-8a2d-173a2314771d", 00:09:50.025 "is_configured": true, 00:09:50.025 "data_offset": 2048, 00:09:50.025 "data_size": 63488 00:09:50.025 } 00:09:50.025 ] 00:09:50.025 }' 00:09:50.025 14:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.025 14:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.594 14:35:49 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:50.594 14:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.594 14:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.594 [2024-11-04 14:35:49.516226] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:50.594 [2024-11-04 14:35:49.516271] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:50.594 [2024-11-04 14:35:49.519754] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:50.594 { 00:09:50.594 "results": [ 00:09:50.594 { 00:09:50.594 "job": "raid_bdev1", 00:09:50.594 "core_mask": "0x1", 00:09:50.594 "workload": "randrw", 00:09:50.594 "percentage": 50, 00:09:50.594 "status": "finished", 00:09:50.594 "queue_depth": 1, 00:09:50.594 "io_size": 131072, 00:09:50.594 "runtime": 1.413026, 00:09:50.594 "iops": 10726.624987792156, 00:09:50.594 "mibps": 1340.8281234740195, 00:09:50.594 "io_failed": 1, 00:09:50.594 "io_timeout": 0, 00:09:50.594 "avg_latency_us": 130.13276781537502, 00:09:50.594 "min_latency_us": 38.63272727272727, 00:09:50.594 "max_latency_us": 1839.4763636363637 00:09:50.594 } 00:09:50.594 ], 00:09:50.594 "core_count": 1 00:09:50.594 } 00:09:50.594 [2024-11-04 14:35:49.520018] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.594 [2024-11-04 14:35:49.520080] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:50.594 [2024-11-04 14:35:49.520101] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:50.594 14:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.594 14:35:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61520 00:09:50.594 14:35:49 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 61520 ']' 00:09:50.594 14:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 61520 00:09:50.594 14:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:09:50.594 14:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:50.594 14:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61520 00:09:50.594 killing process with pid 61520 00:09:50.594 14:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:50.594 14:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:50.594 14:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61520' 00:09:50.594 14:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 61520 00:09:50.594 [2024-11-04 14:35:49.556882] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:50.594 14:35:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 61520 00:09:50.594 [2024-11-04 14:35:49.684433] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:51.969 14:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.yZ1bAgQvZn 00:09:51.969 14:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:51.969 14:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:51.969 14:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:51.969 14:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:51.969 14:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:51.969 14:35:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:51.969 14:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:51.969 00:09:51.969 real 0m4.621s 00:09:51.970 user 0m5.811s 00:09:51.970 sys 0m0.557s 00:09:51.970 ************************************ 00:09:51.970 END TEST raid_write_error_test 00:09:51.970 ************************************ 00:09:51.970 14:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:51.970 14:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.970 14:35:50 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:51.970 14:35:50 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:09:51.970 14:35:50 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:51.970 14:35:50 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:51.970 14:35:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:51.970 ************************************ 00:09:51.970 START TEST raid_state_function_test 00:09:51.970 ************************************ 00:09:51.970 14:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 false 00:09:51.970 14:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:51.970 14:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:51.970 14:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:51.970 14:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:51.970 14:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:51.970 14:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 
-- # (( i <= num_base_bdevs )) 00:09:51.970 14:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:51.970 14:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:51.970 14:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:51.970 14:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:51.970 14:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:51.970 14:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:51.970 Process raid pid: 61658 00:09:51.970 14:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:51.970 14:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:51.970 14:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:51.970 14:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:51.970 14:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:51.970 14:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:51.970 14:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:51.970 14:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:51.970 14:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:51.970 14:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:51.970 14:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:51.970 14:35:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@229 -- # raid_pid=61658 00:09:51.970 14:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61658' 00:09:51.970 14:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:51.970 14:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61658 00:09:51.970 14:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 61658 ']' 00:09:51.970 14:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.970 14:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:51.970 14:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.970 14:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:51.970 14:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.970 [2024-11-04 14:35:50.977105] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:09:51.970 [2024-11-04 14:35:50.977516] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.228 [2024-11-04 14:35:51.164546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.228 [2024-11-04 14:35:51.293183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.539 [2024-11-04 14:35:51.498348] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.539 [2024-11-04 14:35:51.498619] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.107 14:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:53.107 14:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:09:53.107 14:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:53.107 14:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.107 14:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.107 [2024-11-04 14:35:51.966184] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:53.107 [2024-11-04 14:35:51.966250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:53.107 [2024-11-04 14:35:51.966268] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:53.107 [2024-11-04 14:35:51.966285] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:53.107 14:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.107 14:35:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:53.107 14:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.107 14:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.107 14:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:53.107 14:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.107 14:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:53.107 14:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.107 14:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.107 14:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.107 14:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.107 14:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.107 14:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.107 14:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.107 14:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.107 14:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.107 14:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.107 "name": "Existed_Raid", 00:09:53.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.107 "strip_size_kb": 64, 00:09:53.107 "state": "configuring", 00:09:53.107 
"raid_level": "concat", 00:09:53.107 "superblock": false, 00:09:53.107 "num_base_bdevs": 2, 00:09:53.107 "num_base_bdevs_discovered": 0, 00:09:53.107 "num_base_bdevs_operational": 2, 00:09:53.107 "base_bdevs_list": [ 00:09:53.107 { 00:09:53.107 "name": "BaseBdev1", 00:09:53.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.107 "is_configured": false, 00:09:53.107 "data_offset": 0, 00:09:53.107 "data_size": 0 00:09:53.107 }, 00:09:53.107 { 00:09:53.107 "name": "BaseBdev2", 00:09:53.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.107 "is_configured": false, 00:09:53.107 "data_offset": 0, 00:09:53.107 "data_size": 0 00:09:53.107 } 00:09:53.107 ] 00:09:53.107 }' 00:09:53.107 14:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.107 14:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.366 14:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:53.366 14:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.366 14:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.366 [2024-11-04 14:35:52.482270] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:53.366 [2024-11-04 14:35:52.482497] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:53.625 [2024-11-04 14:35:52.490240] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:53.625 [2024-11-04 14:35:52.490413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:53.625 [2024-11-04 14:35:52.490574] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:53.625 [2024-11-04 14:35:52.490641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.625 [2024-11-04 14:35:52.535012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:53.625 BaseBdev1 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.625 [ 00:09:53.625 { 00:09:53.625 "name": "BaseBdev1", 00:09:53.625 "aliases": [ 00:09:53.625 "75ba89e9-a15f-45af-9575-16a16efb1cf4" 00:09:53.625 ], 00:09:53.625 "product_name": "Malloc disk", 00:09:53.625 "block_size": 512, 00:09:53.625 "num_blocks": 65536, 00:09:53.625 "uuid": "75ba89e9-a15f-45af-9575-16a16efb1cf4", 00:09:53.625 "assigned_rate_limits": { 00:09:53.625 "rw_ios_per_sec": 0, 00:09:53.625 "rw_mbytes_per_sec": 0, 00:09:53.625 "r_mbytes_per_sec": 0, 00:09:53.625 "w_mbytes_per_sec": 0 00:09:53.625 }, 00:09:53.625 "claimed": true, 00:09:53.625 "claim_type": "exclusive_write", 00:09:53.625 "zoned": false, 00:09:53.625 "supported_io_types": { 00:09:53.625 "read": true, 00:09:53.625 "write": true, 00:09:53.625 "unmap": true, 00:09:53.625 "flush": true, 00:09:53.625 "reset": true, 00:09:53.625 "nvme_admin": false, 00:09:53.625 "nvme_io": false, 00:09:53.625 "nvme_io_md": false, 00:09:53.625 "write_zeroes": true, 00:09:53.625 "zcopy": true, 00:09:53.625 "get_zone_info": false, 00:09:53.625 "zone_management": false, 00:09:53.625 "zone_append": false, 00:09:53.625 "compare": false, 00:09:53.625 "compare_and_write": false, 00:09:53.625 "abort": true, 00:09:53.625 "seek_hole": false, 00:09:53.625 "seek_data": false, 00:09:53.625 "copy": true, 00:09:53.625 "nvme_iov_md": 
false 00:09:53.625 }, 00:09:53.625 "memory_domains": [ 00:09:53.625 { 00:09:53.625 "dma_device_id": "system", 00:09:53.625 "dma_device_type": 1 00:09:53.625 }, 00:09:53.625 { 00:09:53.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.625 "dma_device_type": 2 00:09:53.625 } 00:09:53.625 ], 00:09:53.625 "driver_specific": {} 00:09:53.625 } 00:09:53.625 ] 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.625 
14:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.625 "name": "Existed_Raid", 00:09:53.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.625 "strip_size_kb": 64, 00:09:53.625 "state": "configuring", 00:09:53.625 "raid_level": "concat", 00:09:53.625 "superblock": false, 00:09:53.625 "num_base_bdevs": 2, 00:09:53.625 "num_base_bdevs_discovered": 1, 00:09:53.625 "num_base_bdevs_operational": 2, 00:09:53.625 "base_bdevs_list": [ 00:09:53.625 { 00:09:53.625 "name": "BaseBdev1", 00:09:53.625 "uuid": "75ba89e9-a15f-45af-9575-16a16efb1cf4", 00:09:53.625 "is_configured": true, 00:09:53.625 "data_offset": 0, 00:09:53.625 "data_size": 65536 00:09:53.625 }, 00:09:53.625 { 00:09:53.625 "name": "BaseBdev2", 00:09:53.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.625 "is_configured": false, 00:09:53.625 "data_offset": 0, 00:09:53.625 "data_size": 0 00:09:53.625 } 00:09:53.625 ] 00:09:53.625 }' 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.625 14:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.192 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:54.192 14:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.192 14:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.192 [2024-11-04 14:35:53.079224] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:54.192 [2024-11-04 14:35:53.079301] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:54.192 14:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.192 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:54.192 14:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.192 14:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.192 [2024-11-04 14:35:53.087281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:54.192 [2024-11-04 14:35:53.089719] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:54.192 [2024-11-04 14:35:53.089802] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:54.192 14:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.192 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:54.192 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:54.192 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:54.192 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.192 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.192 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:54.192 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.192 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:09:54.192 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.192 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.192 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.192 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.192 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.192 14:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.192 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.192 14:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.192 14:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.192 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.192 "name": "Existed_Raid", 00:09:54.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.192 "strip_size_kb": 64, 00:09:54.192 "state": "configuring", 00:09:54.192 "raid_level": "concat", 00:09:54.192 "superblock": false, 00:09:54.192 "num_base_bdevs": 2, 00:09:54.192 "num_base_bdevs_discovered": 1, 00:09:54.192 "num_base_bdevs_operational": 2, 00:09:54.192 "base_bdevs_list": [ 00:09:54.192 { 00:09:54.192 "name": "BaseBdev1", 00:09:54.192 "uuid": "75ba89e9-a15f-45af-9575-16a16efb1cf4", 00:09:54.192 "is_configured": true, 00:09:54.192 "data_offset": 0, 00:09:54.192 "data_size": 65536 00:09:54.192 }, 00:09:54.192 { 00:09:54.192 "name": "BaseBdev2", 00:09:54.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.192 "is_configured": false, 00:09:54.192 "data_offset": 0, 00:09:54.192 "data_size": 0 00:09:54.192 } 
00:09:54.192 ] 00:09:54.192 }' 00:09:54.192 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.192 14:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.759 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:54.759 14:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.759 14:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.759 [2024-11-04 14:35:53.651248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:54.759 [2024-11-04 14:35:53.651474] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:54.759 [2024-11-04 14:35:53.651498] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:54.759 [2024-11-04 14:35:53.651884] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:54.759 [2024-11-04 14:35:53.652183] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:54.759 [2024-11-04 14:35:53.652208] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:54.759 BaseBdev2 00:09:54.759 [2024-11-04 14:35:53.652531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.759 14:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.759 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:54.759 14:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:54.759 14:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:54.759 14:35:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:54.759 14:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:54.759 14:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:54.759 14:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:54.759 14:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.759 14:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.759 14:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.759 14:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:54.759 14:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.759 14:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.759 [ 00:09:54.759 { 00:09:54.759 "name": "BaseBdev2", 00:09:54.759 "aliases": [ 00:09:54.759 "4c76af90-b09d-46ce-a33a-6371916547bb" 00:09:54.759 ], 00:09:54.759 "product_name": "Malloc disk", 00:09:54.759 "block_size": 512, 00:09:54.759 "num_blocks": 65536, 00:09:54.759 "uuid": "4c76af90-b09d-46ce-a33a-6371916547bb", 00:09:54.759 "assigned_rate_limits": { 00:09:54.759 "rw_ios_per_sec": 0, 00:09:54.759 "rw_mbytes_per_sec": 0, 00:09:54.759 "r_mbytes_per_sec": 0, 00:09:54.759 "w_mbytes_per_sec": 0 00:09:54.759 }, 00:09:54.759 "claimed": true, 00:09:54.759 "claim_type": "exclusive_write", 00:09:54.759 "zoned": false, 00:09:54.759 "supported_io_types": { 00:09:54.759 "read": true, 00:09:54.759 "write": true, 00:09:54.759 "unmap": true, 00:09:54.759 "flush": true, 00:09:54.759 "reset": true, 00:09:54.759 "nvme_admin": false, 00:09:54.759 "nvme_io": false, 00:09:54.759 "nvme_io_md": 
false, 00:09:54.759 "write_zeroes": true, 00:09:54.759 "zcopy": true, 00:09:54.759 "get_zone_info": false, 00:09:54.759 "zone_management": false, 00:09:54.759 "zone_append": false, 00:09:54.759 "compare": false, 00:09:54.759 "compare_and_write": false, 00:09:54.759 "abort": true, 00:09:54.759 "seek_hole": false, 00:09:54.759 "seek_data": false, 00:09:54.759 "copy": true, 00:09:54.759 "nvme_iov_md": false 00:09:54.759 }, 00:09:54.759 "memory_domains": [ 00:09:54.759 { 00:09:54.759 "dma_device_id": "system", 00:09:54.759 "dma_device_type": 1 00:09:54.759 }, 00:09:54.759 { 00:09:54.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.759 "dma_device_type": 2 00:09:54.759 } 00:09:54.759 ], 00:09:54.759 "driver_specific": {} 00:09:54.759 } 00:09:54.759 ] 00:09:54.759 14:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.759 14:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:54.760 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:54.760 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:54.760 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:54.760 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.760 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.760 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:54.760 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.760 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:54.760 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:54.760 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.760 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.760 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.760 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.760 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.760 14:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.760 14:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.760 14:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.760 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.760 "name": "Existed_Raid", 00:09:54.760 "uuid": "fbfcd4a1-9a27-464a-a79b-fa13066e153f", 00:09:54.760 "strip_size_kb": 64, 00:09:54.760 "state": "online", 00:09:54.760 "raid_level": "concat", 00:09:54.760 "superblock": false, 00:09:54.760 "num_base_bdevs": 2, 00:09:54.760 "num_base_bdevs_discovered": 2, 00:09:54.760 "num_base_bdevs_operational": 2, 00:09:54.760 "base_bdevs_list": [ 00:09:54.760 { 00:09:54.760 "name": "BaseBdev1", 00:09:54.760 "uuid": "75ba89e9-a15f-45af-9575-16a16efb1cf4", 00:09:54.760 "is_configured": true, 00:09:54.760 "data_offset": 0, 00:09:54.760 "data_size": 65536 00:09:54.760 }, 00:09:54.760 { 00:09:54.760 "name": "BaseBdev2", 00:09:54.760 "uuid": "4c76af90-b09d-46ce-a33a-6371916547bb", 00:09:54.760 "is_configured": true, 00:09:54.760 "data_offset": 0, 00:09:54.760 "data_size": 65536 00:09:54.760 } 00:09:54.760 ] 00:09:54.760 }' 00:09:54.760 14:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:54.760 14:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.327 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:55.327 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:55.327 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:55.327 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:55.327 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:55.327 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:55.327 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:55.327 14:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.327 14:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.327 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:55.327 [2024-11-04 14:35:54.223812] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:55.327 14:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.327 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:55.327 "name": "Existed_Raid", 00:09:55.327 "aliases": [ 00:09:55.327 "fbfcd4a1-9a27-464a-a79b-fa13066e153f" 00:09:55.327 ], 00:09:55.327 "product_name": "Raid Volume", 00:09:55.327 "block_size": 512, 00:09:55.327 "num_blocks": 131072, 00:09:55.327 "uuid": "fbfcd4a1-9a27-464a-a79b-fa13066e153f", 00:09:55.327 "assigned_rate_limits": { 00:09:55.327 "rw_ios_per_sec": 0, 00:09:55.327 "rw_mbytes_per_sec": 0, 00:09:55.328 "r_mbytes_per_sec": 
0, 00:09:55.328 "w_mbytes_per_sec": 0 00:09:55.328 }, 00:09:55.328 "claimed": false, 00:09:55.328 "zoned": false, 00:09:55.328 "supported_io_types": { 00:09:55.328 "read": true, 00:09:55.328 "write": true, 00:09:55.328 "unmap": true, 00:09:55.328 "flush": true, 00:09:55.328 "reset": true, 00:09:55.328 "nvme_admin": false, 00:09:55.328 "nvme_io": false, 00:09:55.328 "nvme_io_md": false, 00:09:55.328 "write_zeroes": true, 00:09:55.328 "zcopy": false, 00:09:55.328 "get_zone_info": false, 00:09:55.328 "zone_management": false, 00:09:55.328 "zone_append": false, 00:09:55.328 "compare": false, 00:09:55.328 "compare_and_write": false, 00:09:55.328 "abort": false, 00:09:55.328 "seek_hole": false, 00:09:55.328 "seek_data": false, 00:09:55.328 "copy": false, 00:09:55.328 "nvme_iov_md": false 00:09:55.328 }, 00:09:55.328 "memory_domains": [ 00:09:55.328 { 00:09:55.328 "dma_device_id": "system", 00:09:55.328 "dma_device_type": 1 00:09:55.328 }, 00:09:55.328 { 00:09:55.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.328 "dma_device_type": 2 00:09:55.328 }, 00:09:55.328 { 00:09:55.328 "dma_device_id": "system", 00:09:55.328 "dma_device_type": 1 00:09:55.328 }, 00:09:55.328 { 00:09:55.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.328 "dma_device_type": 2 00:09:55.328 } 00:09:55.328 ], 00:09:55.328 "driver_specific": { 00:09:55.328 "raid": { 00:09:55.328 "uuid": "fbfcd4a1-9a27-464a-a79b-fa13066e153f", 00:09:55.328 "strip_size_kb": 64, 00:09:55.328 "state": "online", 00:09:55.328 "raid_level": "concat", 00:09:55.328 "superblock": false, 00:09:55.328 "num_base_bdevs": 2, 00:09:55.328 "num_base_bdevs_discovered": 2, 00:09:55.328 "num_base_bdevs_operational": 2, 00:09:55.328 "base_bdevs_list": [ 00:09:55.328 { 00:09:55.328 "name": "BaseBdev1", 00:09:55.328 "uuid": "75ba89e9-a15f-45af-9575-16a16efb1cf4", 00:09:55.328 "is_configured": true, 00:09:55.328 "data_offset": 0, 00:09:55.328 "data_size": 65536 00:09:55.328 }, 00:09:55.328 { 00:09:55.328 "name": "BaseBdev2", 
00:09:55.328 "uuid": "4c76af90-b09d-46ce-a33a-6371916547bb", 00:09:55.328 "is_configured": true, 00:09:55.328 "data_offset": 0, 00:09:55.328 "data_size": 65536 00:09:55.328 } 00:09:55.328 ] 00:09:55.328 } 00:09:55.328 } 00:09:55.328 }' 00:09:55.328 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:55.328 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:55.328 BaseBdev2' 00:09:55.328 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.328 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:55.328 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.328 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:55.328 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.328 14:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.328 14:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.328 14:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.328 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.328 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.328 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.328 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:09:55.328 14:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.328 14:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.328 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.328 14:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.587 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.587 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.587 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:55.587 14:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.587 14:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.587 [2024-11-04 14:35:54.487569] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:55.587 [2024-11-04 14:35:54.487758] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:55.587 [2024-11-04 14:35:54.487976] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:55.587 14:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.587 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:55.587 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:55.587 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:55.587 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:55.587 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:09:55.587 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:55.587 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.587 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:55.587 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:55.587 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.587 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:55.587 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.587 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.587 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.587 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.587 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.587 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.587 14:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.587 14:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.587 14:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.587 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.587 "name": "Existed_Raid", 00:09:55.587 "uuid": "fbfcd4a1-9a27-464a-a79b-fa13066e153f", 00:09:55.587 "strip_size_kb": 64, 00:09:55.587 
"state": "offline", 00:09:55.587 "raid_level": "concat", 00:09:55.587 "superblock": false, 00:09:55.587 "num_base_bdevs": 2, 00:09:55.587 "num_base_bdevs_discovered": 1, 00:09:55.587 "num_base_bdevs_operational": 1, 00:09:55.587 "base_bdevs_list": [ 00:09:55.587 { 00:09:55.587 "name": null, 00:09:55.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.587 "is_configured": false, 00:09:55.587 "data_offset": 0, 00:09:55.587 "data_size": 65536 00:09:55.587 }, 00:09:55.587 { 00:09:55.587 "name": "BaseBdev2", 00:09:55.587 "uuid": "4c76af90-b09d-46ce-a33a-6371916547bb", 00:09:55.587 "is_configured": true, 00:09:55.587 "data_offset": 0, 00:09:55.587 "data_size": 65536 00:09:55.587 } 00:09:55.587 ] 00:09:55.587 }' 00:09:55.587 14:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.587 14:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.154 14:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:56.154 14:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:56.154 14:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:56.154 14:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.154 14:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.154 14:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.154 14:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.155 14:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:56.155 14:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:56.155 14:35:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:56.155 14:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.155 14:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.155 [2024-11-04 14:35:55.121272] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:56.155 [2024-11-04 14:35:55.121561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:56.155 14:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.155 14:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:56.155 14:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:56.155 14:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.155 14:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:56.155 14:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.155 14:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.155 14:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.155 14:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:56.155 14:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:56.155 14:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:56.155 14:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61658 00:09:56.155 14:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 61658 ']' 00:09:56.155 14:35:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # kill -0 61658 00:09:56.155 14:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:09:56.155 14:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:56.155 14:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61658 00:09:56.423 killing process with pid 61658 00:09:56.423 14:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:56.423 14:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:56.423 14:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61658' 00:09:56.423 14:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 61658 00:09:56.423 [2024-11-04 14:35:55.300166] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:56.423 14:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 61658 00:09:56.423 [2024-11-04 14:35:55.315473] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:57.373 00:09:57.373 real 0m5.458s 00:09:57.373 user 0m8.278s 00:09:57.373 sys 0m0.756s 00:09:57.373 ************************************ 00:09:57.373 END TEST raid_state_function_test 00:09:57.373 ************************************ 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.373 14:35:56 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:09:57.373 14:35:56 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 
']' 00:09:57.373 14:35:56 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:57.373 14:35:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:57.373 ************************************ 00:09:57.373 START TEST raid_state_function_test_sb 00:09:57.373 ************************************ 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 true 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:09:57.373 14:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:57.373 Process raid pid: 61917 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61917 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61917' 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61917 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 61917 ']' 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:57.373 14:35:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.634 [2024-11-04 14:35:56.504984] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:09:57.634 [2024-11-04 14:35:56.505440] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.634 [2024-11-04 14:35:56.689196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.892 [2024-11-04 14:35:56.826768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.151 [2024-11-04 14:35:57.039097] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.151 [2024-11-04 14:35:57.039151] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.409 14:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:58.409 14:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:09:58.409 14:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:58.409 14:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.409 14:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.409 [2024-11-04 14:35:57.506525] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:09:58.409 [2024-11-04 14:35:57.506741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:58.409 [2024-11-04 14:35:57.506887] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:58.409 [2024-11-04 14:35:57.506923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:58.409 14:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.409 14:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:58.409 14:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.409 14:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.409 14:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:58.409 14:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.409 14:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:58.409 14:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.409 14:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.409 14:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.410 14:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.410 14:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.410 14:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:58.410 14:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.410 14:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.410 14:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.731 14:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.731 "name": "Existed_Raid", 00:09:58.731 "uuid": "3b5002ce-24a6-406b-8706-2dfc310502c6", 00:09:58.731 "strip_size_kb": 64, 00:09:58.731 "state": "configuring", 00:09:58.731 "raid_level": "concat", 00:09:58.731 "superblock": true, 00:09:58.731 "num_base_bdevs": 2, 00:09:58.731 "num_base_bdevs_discovered": 0, 00:09:58.731 "num_base_bdevs_operational": 2, 00:09:58.731 "base_bdevs_list": [ 00:09:58.731 { 00:09:58.731 "name": "BaseBdev1", 00:09:58.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.731 "is_configured": false, 00:09:58.731 "data_offset": 0, 00:09:58.731 "data_size": 0 00:09:58.731 }, 00:09:58.731 { 00:09:58.731 "name": "BaseBdev2", 00:09:58.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.731 "is_configured": false, 00:09:58.731 "data_offset": 0, 00:09:58.731 "data_size": 0 00:09:58.731 } 00:09:58.731 ] 00:09:58.731 }' 00:09:58.731 14:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.731 14:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.036 14:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:59.036 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.036 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.036 [2024-11-04 14:35:58.022674] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:09:59.036 [2024-11-04 14:35:58.022713] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:59.036 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.036 14:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.037 [2024-11-04 14:35:58.030623] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:59.037 [2024-11-04 14:35:58.030889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:59.037 [2024-11-04 14:35:58.031107] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:59.037 [2024-11-04 14:35:58.031177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.037 [2024-11-04 14:35:58.076764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.037 BaseBdev1 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.037 [ 00:09:59.037 { 00:09:59.037 "name": "BaseBdev1", 00:09:59.037 "aliases": [ 00:09:59.037 "d1eeead2-ce40-4a4a-b47e-08e8c2611ab5" 00:09:59.037 ], 00:09:59.037 "product_name": "Malloc disk", 00:09:59.037 "block_size": 512, 00:09:59.037 "num_blocks": 65536, 00:09:59.037 "uuid": "d1eeead2-ce40-4a4a-b47e-08e8c2611ab5", 00:09:59.037 "assigned_rate_limits": { 00:09:59.037 "rw_ios_per_sec": 0, 00:09:59.037 "rw_mbytes_per_sec": 0, 00:09:59.037 "r_mbytes_per_sec": 0, 00:09:59.037 "w_mbytes_per_sec": 0 00:09:59.037 }, 00:09:59.037 "claimed": true, 
00:09:59.037 "claim_type": "exclusive_write", 00:09:59.037 "zoned": false, 00:09:59.037 "supported_io_types": { 00:09:59.037 "read": true, 00:09:59.037 "write": true, 00:09:59.037 "unmap": true, 00:09:59.037 "flush": true, 00:09:59.037 "reset": true, 00:09:59.037 "nvme_admin": false, 00:09:59.037 "nvme_io": false, 00:09:59.037 "nvme_io_md": false, 00:09:59.037 "write_zeroes": true, 00:09:59.037 "zcopy": true, 00:09:59.037 "get_zone_info": false, 00:09:59.037 "zone_management": false, 00:09:59.037 "zone_append": false, 00:09:59.037 "compare": false, 00:09:59.037 "compare_and_write": false, 00:09:59.037 "abort": true, 00:09:59.037 "seek_hole": false, 00:09:59.037 "seek_data": false, 00:09:59.037 "copy": true, 00:09:59.037 "nvme_iov_md": false 00:09:59.037 }, 00:09:59.037 "memory_domains": [ 00:09:59.037 { 00:09:59.037 "dma_device_id": "system", 00:09:59.037 "dma_device_type": 1 00:09:59.037 }, 00:09:59.037 { 00:09:59.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.037 "dma_device_type": 2 00:09:59.037 } 00:09:59.037 ], 00:09:59.037 "driver_specific": {} 00:09:59.037 } 00:09:59.037 ] 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.037 14:35:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.037 "name": "Existed_Raid", 00:09:59.037 "uuid": "0372fcca-ea2b-48ba-9999-87fed2b762b0", 00:09:59.037 "strip_size_kb": 64, 00:09:59.037 "state": "configuring", 00:09:59.037 "raid_level": "concat", 00:09:59.037 "superblock": true, 00:09:59.037 "num_base_bdevs": 2, 00:09:59.037 "num_base_bdevs_discovered": 1, 00:09:59.037 "num_base_bdevs_operational": 2, 00:09:59.037 "base_bdevs_list": [ 00:09:59.037 { 00:09:59.037 "name": "BaseBdev1", 00:09:59.037 "uuid": "d1eeead2-ce40-4a4a-b47e-08e8c2611ab5", 00:09:59.037 "is_configured": true, 00:09:59.037 "data_offset": 2048, 00:09:59.037 "data_size": 63488 00:09:59.037 }, 00:09:59.037 { 00:09:59.037 "name": "BaseBdev2", 00:09:59.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.037 
"is_configured": false, 00:09:59.037 "data_offset": 0, 00:09:59.037 "data_size": 0 00:09:59.037 } 00:09:59.037 ] 00:09:59.037 }' 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.037 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.605 14:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:59.605 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.605 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.605 [2024-11-04 14:35:58.665067] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:59.605 [2024-11-04 14:35:58.665130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:59.605 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.605 14:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:59.605 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.605 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.605 [2024-11-04 14:35:58.677131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.605 [2024-11-04 14:35:58.679676] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:59.605 [2024-11-04 14:35:58.679904] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:59.605 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.605 14:35:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:59.605 14:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:59.605 14:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:59.605 14:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.605 14:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.605 14:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:59.605 14:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.605 14:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:59.605 14:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.605 14:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.605 14:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.605 14:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.605 14:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.605 14:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.605 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.605 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.605 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.863 14:35:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.863 "name": "Existed_Raid", 00:09:59.863 "uuid": "bd6e48a3-1213-4d66-bb28-9b03ae7fe5de", 00:09:59.863 "strip_size_kb": 64, 00:09:59.863 "state": "configuring", 00:09:59.863 "raid_level": "concat", 00:09:59.863 "superblock": true, 00:09:59.863 "num_base_bdevs": 2, 00:09:59.863 "num_base_bdevs_discovered": 1, 00:09:59.863 "num_base_bdevs_operational": 2, 00:09:59.863 "base_bdevs_list": [ 00:09:59.863 { 00:09:59.863 "name": "BaseBdev1", 00:09:59.863 "uuid": "d1eeead2-ce40-4a4a-b47e-08e8c2611ab5", 00:09:59.863 "is_configured": true, 00:09:59.863 "data_offset": 2048, 00:09:59.863 "data_size": 63488 00:09:59.863 }, 00:09:59.863 { 00:09:59.863 "name": "BaseBdev2", 00:09:59.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.863 "is_configured": false, 00:09:59.863 "data_offset": 0, 00:09:59.863 "data_size": 0 00:09:59.863 } 00:09:59.863 ] 00:09:59.863 }' 00:09:59.863 14:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.863 14:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.122 14:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:00.122 14:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.122 14:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.381 [2024-11-04 14:35:59.250418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:00.381 BaseBdev2 00:10:00.381 [2024-11-04 14:35:59.251039] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:00.381 [2024-11-04 14:35:59.251079] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:00.381 [2024-11-04 14:35:59.251446] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:00.381 [2024-11-04 14:35:59.251661] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:00.381 [2024-11-04 14:35:59.251681] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:00.381 [2024-11-04 14:35:59.251945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.381 14:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.381 14:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:00.381 14:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:00.381 14:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:00.381 14:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:00.381 14:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:00.381 14:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:00.381 14:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:00.381 14:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.381 14:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.381 14:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.381 14:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:00.381 14:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.381 
14:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.381 [ 00:10:00.381 { 00:10:00.381 "name": "BaseBdev2", 00:10:00.381 "aliases": [ 00:10:00.381 "92378104-60f1-49e6-9cba-cd8432760041" 00:10:00.381 ], 00:10:00.381 "product_name": "Malloc disk", 00:10:00.381 "block_size": 512, 00:10:00.381 "num_blocks": 65536, 00:10:00.381 "uuid": "92378104-60f1-49e6-9cba-cd8432760041", 00:10:00.381 "assigned_rate_limits": { 00:10:00.381 "rw_ios_per_sec": 0, 00:10:00.381 "rw_mbytes_per_sec": 0, 00:10:00.381 "r_mbytes_per_sec": 0, 00:10:00.381 "w_mbytes_per_sec": 0 00:10:00.381 }, 00:10:00.381 "claimed": true, 00:10:00.381 "claim_type": "exclusive_write", 00:10:00.381 "zoned": false, 00:10:00.381 "supported_io_types": { 00:10:00.381 "read": true, 00:10:00.381 "write": true, 00:10:00.381 "unmap": true, 00:10:00.381 "flush": true, 00:10:00.381 "reset": true, 00:10:00.381 "nvme_admin": false, 00:10:00.381 "nvme_io": false, 00:10:00.381 "nvme_io_md": false, 00:10:00.381 "write_zeroes": true, 00:10:00.381 "zcopy": true, 00:10:00.381 "get_zone_info": false, 00:10:00.381 "zone_management": false, 00:10:00.381 "zone_append": false, 00:10:00.381 "compare": false, 00:10:00.381 "compare_and_write": false, 00:10:00.381 "abort": true, 00:10:00.381 "seek_hole": false, 00:10:00.381 "seek_data": false, 00:10:00.381 "copy": true, 00:10:00.381 "nvme_iov_md": false 00:10:00.381 }, 00:10:00.381 "memory_domains": [ 00:10:00.381 { 00:10:00.381 "dma_device_id": "system", 00:10:00.381 "dma_device_type": 1 00:10:00.381 }, 00:10:00.381 { 00:10:00.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.381 "dma_device_type": 2 00:10:00.381 } 00:10:00.381 ], 00:10:00.381 "driver_specific": {} 00:10:00.381 } 00:10:00.381 ] 00:10:00.381 14:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.381 14:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:00.381 14:35:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:00.381 14:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:00.381 14:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:10:00.381 14:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.381 14:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:00.381 14:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:00.381 14:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.381 14:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:00.381 14:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.381 14:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.381 14:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.381 14:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.381 14:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.381 14:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.381 14:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.381 14:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.381 14:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.381 14:35:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.381 "name": "Existed_Raid", 00:10:00.381 "uuid": "bd6e48a3-1213-4d66-bb28-9b03ae7fe5de", 00:10:00.381 "strip_size_kb": 64, 00:10:00.381 "state": "online", 00:10:00.381 "raid_level": "concat", 00:10:00.381 "superblock": true, 00:10:00.381 "num_base_bdevs": 2, 00:10:00.381 "num_base_bdevs_discovered": 2, 00:10:00.381 "num_base_bdevs_operational": 2, 00:10:00.381 "base_bdevs_list": [ 00:10:00.381 { 00:10:00.381 "name": "BaseBdev1", 00:10:00.381 "uuid": "d1eeead2-ce40-4a4a-b47e-08e8c2611ab5", 00:10:00.381 "is_configured": true, 00:10:00.381 "data_offset": 2048, 00:10:00.381 "data_size": 63488 00:10:00.381 }, 00:10:00.381 { 00:10:00.381 "name": "BaseBdev2", 00:10:00.381 "uuid": "92378104-60f1-49e6-9cba-cd8432760041", 00:10:00.381 "is_configured": true, 00:10:00.381 "data_offset": 2048, 00:10:00.381 "data_size": 63488 00:10:00.381 } 00:10:00.381 ] 00:10:00.381 }' 00:10:00.381 14:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.381 14:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.018 14:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:01.018 14:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:01.018 14:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:01.018 14:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:01.018 14:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:01.018 14:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:01.018 14:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:01.018 14:35:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:01.018 14:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.018 14:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.018 [2024-11-04 14:35:59.811067] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.018 14:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.018 14:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:01.018 "name": "Existed_Raid", 00:10:01.018 "aliases": [ 00:10:01.018 "bd6e48a3-1213-4d66-bb28-9b03ae7fe5de" 00:10:01.018 ], 00:10:01.018 "product_name": "Raid Volume", 00:10:01.018 "block_size": 512, 00:10:01.018 "num_blocks": 126976, 00:10:01.018 "uuid": "bd6e48a3-1213-4d66-bb28-9b03ae7fe5de", 00:10:01.018 "assigned_rate_limits": { 00:10:01.018 "rw_ios_per_sec": 0, 00:10:01.018 "rw_mbytes_per_sec": 0, 00:10:01.018 "r_mbytes_per_sec": 0, 00:10:01.018 "w_mbytes_per_sec": 0 00:10:01.018 }, 00:10:01.018 "claimed": false, 00:10:01.018 "zoned": false, 00:10:01.018 "supported_io_types": { 00:10:01.018 "read": true, 00:10:01.018 "write": true, 00:10:01.018 "unmap": true, 00:10:01.018 "flush": true, 00:10:01.018 "reset": true, 00:10:01.018 "nvme_admin": false, 00:10:01.018 "nvme_io": false, 00:10:01.018 "nvme_io_md": false, 00:10:01.018 "write_zeroes": true, 00:10:01.018 "zcopy": false, 00:10:01.018 "get_zone_info": false, 00:10:01.018 "zone_management": false, 00:10:01.018 "zone_append": false, 00:10:01.018 "compare": false, 00:10:01.018 "compare_and_write": false, 00:10:01.018 "abort": false, 00:10:01.018 "seek_hole": false, 00:10:01.018 "seek_data": false, 00:10:01.018 "copy": false, 00:10:01.018 "nvme_iov_md": false 00:10:01.018 }, 00:10:01.018 "memory_domains": [ 00:10:01.018 { 00:10:01.018 "dma_device_id": 
"system", 00:10:01.018 "dma_device_type": 1 00:10:01.018 }, 00:10:01.018 { 00:10:01.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.018 "dma_device_type": 2 00:10:01.018 }, 00:10:01.018 { 00:10:01.018 "dma_device_id": "system", 00:10:01.018 "dma_device_type": 1 00:10:01.018 }, 00:10:01.018 { 00:10:01.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.018 "dma_device_type": 2 00:10:01.018 } 00:10:01.018 ], 00:10:01.018 "driver_specific": { 00:10:01.018 "raid": { 00:10:01.018 "uuid": "bd6e48a3-1213-4d66-bb28-9b03ae7fe5de", 00:10:01.018 "strip_size_kb": 64, 00:10:01.018 "state": "online", 00:10:01.018 "raid_level": "concat", 00:10:01.018 "superblock": true, 00:10:01.018 "num_base_bdevs": 2, 00:10:01.018 "num_base_bdevs_discovered": 2, 00:10:01.018 "num_base_bdevs_operational": 2, 00:10:01.018 "base_bdevs_list": [ 00:10:01.018 { 00:10:01.018 "name": "BaseBdev1", 00:10:01.018 "uuid": "d1eeead2-ce40-4a4a-b47e-08e8c2611ab5", 00:10:01.018 "is_configured": true, 00:10:01.018 "data_offset": 2048, 00:10:01.018 "data_size": 63488 00:10:01.018 }, 00:10:01.018 { 00:10:01.018 "name": "BaseBdev2", 00:10:01.018 "uuid": "92378104-60f1-49e6-9cba-cd8432760041", 00:10:01.018 "is_configured": true, 00:10:01.018 "data_offset": 2048, 00:10:01.018 "data_size": 63488 00:10:01.018 } 00:10:01.018 ] 00:10:01.018 } 00:10:01.018 } 00:10:01.018 }' 00:10:01.018 14:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:01.018 14:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:01.018 BaseBdev2' 00:10:01.018 14:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.018 14:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:01.018 14:35:59 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.018 14:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.018 14:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:01.018 14:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.018 14:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.018 14:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.018 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.018 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.018 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.018 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:01.018 14:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.018 14:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.018 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.018 14:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.018 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.018 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.018 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 
00:10:01.018 14:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.018 14:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.018 [2024-11-04 14:36:00.078914] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:01.018 [2024-11-04 14:36:00.079129] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:01.018 [2024-11-04 14:36:00.079304] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:01.277 14:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.277 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:01.277 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:01.277 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:01.277 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:01.277 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:01.277 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:10:01.277 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.277 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:01.277 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:01.277 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.277 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:01.277 14:36:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.277 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.277 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.277 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.277 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.277 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.278 14:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.278 14:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.278 14:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.278 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.278 "name": "Existed_Raid", 00:10:01.278 "uuid": "bd6e48a3-1213-4d66-bb28-9b03ae7fe5de", 00:10:01.278 "strip_size_kb": 64, 00:10:01.278 "state": "offline", 00:10:01.278 "raid_level": "concat", 00:10:01.278 "superblock": true, 00:10:01.278 "num_base_bdevs": 2, 00:10:01.278 "num_base_bdevs_discovered": 1, 00:10:01.278 "num_base_bdevs_operational": 1, 00:10:01.278 "base_bdevs_list": [ 00:10:01.278 { 00:10:01.278 "name": null, 00:10:01.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.278 "is_configured": false, 00:10:01.278 "data_offset": 0, 00:10:01.278 "data_size": 63488 00:10:01.278 }, 00:10:01.278 { 00:10:01.278 "name": "BaseBdev2", 00:10:01.278 "uuid": "92378104-60f1-49e6-9cba-cd8432760041", 00:10:01.278 "is_configured": true, 00:10:01.278 "data_offset": 2048, 00:10:01.278 "data_size": 63488 00:10:01.278 } 00:10:01.278 ] 00:10:01.278 }' 00:10:01.278 
14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.278 14:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.844 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:01.844 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:01.844 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.844 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:01.844 14:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.844 14:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.844 14:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.844 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:01.844 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:01.844 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:01.844 14:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.844 14:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.844 [2024-11-04 14:36:00.744158] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:01.844 [2024-11-04 14:36:00.744458] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:01.844 14:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.844 14:36:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:01.844 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:01.844 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.845 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:01.845 14:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.845 14:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.845 14:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.845 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:01.845 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:01.845 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:01.845 14:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61917 00:10:01.845 14:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 61917 ']' 00:10:01.845 14:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 61917 00:10:01.845 14:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:10:01.845 14:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:01.845 14:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61917 00:10:01.845 killing process with pid 61917 00:10:01.845 14:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:01.845 14:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' 
reactor_0 = sudo ']' 00:10:01.845 14:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61917' 00:10:01.845 14:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 61917 00:10:01.845 [2024-11-04 14:36:00.928400] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:01.845 14:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 61917 00:10:01.845 [2024-11-04 14:36:00.943772] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:03.219 14:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:03.219 00:10:03.219 real 0m5.620s 00:10:03.219 user 0m8.472s 00:10:03.219 sys 0m0.815s 00:10:03.219 ************************************ 00:10:03.219 END TEST raid_state_function_test_sb 00:10:03.219 ************************************ 00:10:03.219 14:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:03.219 14:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.219 14:36:02 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:10:03.219 14:36:02 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:03.219 14:36:02 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:03.219 14:36:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:03.219 ************************************ 00:10:03.219 START TEST raid_superblock_test 00:10:03.219 ************************************ 00:10:03.219 14:36:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 2 00:10:03.219 14:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:03.219 14:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 
00:10:03.219 14:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:03.219 14:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:03.219 14:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:03.219 14:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:03.219 14:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:03.219 14:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:03.219 14:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:03.219 14:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:03.219 14:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:03.219 14:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:03.219 14:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:03.219 14:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:03.219 14:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:03.219 14:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:03.219 14:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62169 00:10:03.219 14:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62169 00:10:03.219 14:36:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:03.219 14:36:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 62169 ']' 00:10:03.219 14:36:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.219 14:36:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:03.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.219 14:36:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.219 14:36:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:03.219 14:36:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.219 [2024-11-04 14:36:02.175606] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:10:03.219 [2024-11-04 14:36:02.176107] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62169 ] 00:10:03.477 [2024-11-04 14:36:02.366976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.477 [2024-11-04 14:36:02.525825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.736 [2024-11-04 14:36:02.736161] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.736 [2024-11-04 14:36:02.736214] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:04.303 14:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:04.303 14:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:10:04.303 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:04.303 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:04.303 14:36:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:04.303 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:04.303 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:04.303 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:04.303 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:04.303 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:04.303 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:04.303 14:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.303 14:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.303 malloc1 00:10:04.303 14:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.303 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:04.303 14:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.303 14:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.304 [2024-11-04 14:36:03.196058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:04.304 [2024-11-04 14:36:03.196300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.304 [2024-11-04 14:36:03.196461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:04.304 [2024-11-04 14:36:03.196580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.304 
[2024-11-04 14:36:03.199528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.304 [2024-11-04 14:36:03.199696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:04.304 pt1 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.304 malloc2 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.304 14:36:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.304 [2024-11-04 14:36:03.254435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:04.304 [2024-11-04 14:36:03.254712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.304 [2024-11-04 14:36:03.254796] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:04.304 [2024-11-04 14:36:03.254911] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.304 [2024-11-04 14:36:03.257855] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.304 [2024-11-04 14:36:03.258040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:04.304 pt2 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.304 [2024-11-04 14:36:03.262569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:04.304 [2024-11-04 14:36:03.265117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:04.304 [2024-11-04 14:36:03.265456] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:04.304 [2024-11-04 14:36:03.265485] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:04.304 
[2024-11-04 14:36:03.265812] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:04.304 [2024-11-04 14:36:03.266133] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:04.304 [2024-11-04 14:36:03.266159] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:04.304 [2024-11-04 14:36:03.266337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:04.304 14:36:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.304 "name": "raid_bdev1", 00:10:04.304 "uuid": "90e8afcd-944e-4ed9-b0b9-74777cd89d72", 00:10:04.304 "strip_size_kb": 64, 00:10:04.304 "state": "online", 00:10:04.304 "raid_level": "concat", 00:10:04.304 "superblock": true, 00:10:04.304 "num_base_bdevs": 2, 00:10:04.304 "num_base_bdevs_discovered": 2, 00:10:04.304 "num_base_bdevs_operational": 2, 00:10:04.304 "base_bdevs_list": [ 00:10:04.304 { 00:10:04.304 "name": "pt1", 00:10:04.304 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:04.304 "is_configured": true, 00:10:04.304 "data_offset": 2048, 00:10:04.304 "data_size": 63488 00:10:04.304 }, 00:10:04.304 { 00:10:04.304 "name": "pt2", 00:10:04.304 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:04.304 "is_configured": true, 00:10:04.304 "data_offset": 2048, 00:10:04.304 "data_size": 63488 00:10:04.304 } 00:10:04.304 ] 00:10:04.304 }' 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.304 14:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.871 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:04.871 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:04.871 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:04.871 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:04.871 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:04.871 
14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:04.871 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:04.871 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:04.871 14:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.871 14:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.871 [2024-11-04 14:36:03.799055] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:04.871 14:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.871 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:04.871 "name": "raid_bdev1", 00:10:04.871 "aliases": [ 00:10:04.871 "90e8afcd-944e-4ed9-b0b9-74777cd89d72" 00:10:04.871 ], 00:10:04.871 "product_name": "Raid Volume", 00:10:04.871 "block_size": 512, 00:10:04.871 "num_blocks": 126976, 00:10:04.871 "uuid": "90e8afcd-944e-4ed9-b0b9-74777cd89d72", 00:10:04.871 "assigned_rate_limits": { 00:10:04.871 "rw_ios_per_sec": 0, 00:10:04.871 "rw_mbytes_per_sec": 0, 00:10:04.871 "r_mbytes_per_sec": 0, 00:10:04.871 "w_mbytes_per_sec": 0 00:10:04.871 }, 00:10:04.871 "claimed": false, 00:10:04.871 "zoned": false, 00:10:04.871 "supported_io_types": { 00:10:04.871 "read": true, 00:10:04.871 "write": true, 00:10:04.871 "unmap": true, 00:10:04.871 "flush": true, 00:10:04.871 "reset": true, 00:10:04.871 "nvme_admin": false, 00:10:04.871 "nvme_io": false, 00:10:04.871 "nvme_io_md": false, 00:10:04.871 "write_zeroes": true, 00:10:04.871 "zcopy": false, 00:10:04.871 "get_zone_info": false, 00:10:04.871 "zone_management": false, 00:10:04.871 "zone_append": false, 00:10:04.871 "compare": false, 00:10:04.871 "compare_and_write": false, 00:10:04.871 "abort": false, 00:10:04.871 "seek_hole": false, 00:10:04.871 
"seek_data": false, 00:10:04.871 "copy": false, 00:10:04.871 "nvme_iov_md": false 00:10:04.871 }, 00:10:04.871 "memory_domains": [ 00:10:04.871 { 00:10:04.871 "dma_device_id": "system", 00:10:04.871 "dma_device_type": 1 00:10:04.871 }, 00:10:04.871 { 00:10:04.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.871 "dma_device_type": 2 00:10:04.871 }, 00:10:04.871 { 00:10:04.871 "dma_device_id": "system", 00:10:04.871 "dma_device_type": 1 00:10:04.871 }, 00:10:04.871 { 00:10:04.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.871 "dma_device_type": 2 00:10:04.871 } 00:10:04.871 ], 00:10:04.871 "driver_specific": { 00:10:04.871 "raid": { 00:10:04.871 "uuid": "90e8afcd-944e-4ed9-b0b9-74777cd89d72", 00:10:04.871 "strip_size_kb": 64, 00:10:04.871 "state": "online", 00:10:04.871 "raid_level": "concat", 00:10:04.871 "superblock": true, 00:10:04.871 "num_base_bdevs": 2, 00:10:04.871 "num_base_bdevs_discovered": 2, 00:10:04.871 "num_base_bdevs_operational": 2, 00:10:04.871 "base_bdevs_list": [ 00:10:04.871 { 00:10:04.871 "name": "pt1", 00:10:04.871 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:04.871 "is_configured": true, 00:10:04.871 "data_offset": 2048, 00:10:04.871 "data_size": 63488 00:10:04.871 }, 00:10:04.871 { 00:10:04.871 "name": "pt2", 00:10:04.871 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:04.871 "is_configured": true, 00:10:04.871 "data_offset": 2048, 00:10:04.871 "data_size": 63488 00:10:04.871 } 00:10:04.871 ] 00:10:04.871 } 00:10:04.871 } 00:10:04.871 }' 00:10:04.871 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:04.871 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:04.871 pt2' 00:10:04.871 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.871 14:36:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:04.871 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.871 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:04.871 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.871 14:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.871 14:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.871 14:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.130 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.130 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.130 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.130 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:05.130 14:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.130 14:36:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.130 14:36:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.130 [2024-11-04 14:36:04.059064] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=90e8afcd-944e-4ed9-b0b9-74777cd89d72 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 90e8afcd-944e-4ed9-b0b9-74777cd89d72 ']' 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.130 [2024-11-04 14:36:04.118766] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:05.130 [2024-11-04 14:36:04.118797] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:05.130 [2024-11-04 14:36:04.118897] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:05.130 [2024-11-04 14:36:04.118961] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:05.130 [2024-11-04 14:36:04.118997] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.130 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.130 [2024-11-04 14:36:04.250803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:05.389 [2024-11-04 14:36:04.253334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:05.389 [2024-11-04 14:36:04.253543] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:05.389 [2024-11-04 14:36:04.253628] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:05.389 [2024-11-04 14:36:04.253657] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:05.390 [2024-11-04 14:36:04.253674] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:05.390 request: 00:10:05.390 { 00:10:05.390 "name": "raid_bdev1", 00:10:05.390 "raid_level": "concat", 00:10:05.390 "base_bdevs": [ 00:10:05.390 "malloc1", 00:10:05.390 "malloc2" 00:10:05.390 ], 00:10:05.390 "strip_size_kb": 64, 00:10:05.390 "superblock": false, 00:10:05.390 "method": "bdev_raid_create", 00:10:05.390 "req_id": 1 00:10:05.390 } 00:10:05.390 Got JSON-RPC error response 00:10:05.390 response: 00:10:05.390 { 00:10:05.390 "code": -17, 00:10:05.390 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:05.390 } 00:10:05.390 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:05.390 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:05.390 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:05.390 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:05.390 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:05.390 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.390 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.390 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.390 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:05.390 
14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.390 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:05.390 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:05.390 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:05.390 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.390 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.390 [2024-11-04 14:36:04.310779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:05.390 [2024-11-04 14:36:04.310964] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.390 [2024-11-04 14:36:04.311034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:05.390 [2024-11-04 14:36:04.311158] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.390 [2024-11-04 14:36:04.314157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.390 pt1 00:10:05.390 [2024-11-04 14:36:04.314321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:05.390 [2024-11-04 14:36:04.314425] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:05.390 [2024-11-04 14:36:04.314508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:05.390 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.390 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:10:05.390 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:10:05.390 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.390 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:05.390 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.390 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:05.390 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.390 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.390 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.390 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.390 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:05.390 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.390 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.390 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.390 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.390 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.390 "name": "raid_bdev1", 00:10:05.390 "uuid": "90e8afcd-944e-4ed9-b0b9-74777cd89d72", 00:10:05.390 "strip_size_kb": 64, 00:10:05.390 "state": "configuring", 00:10:05.390 "raid_level": "concat", 00:10:05.390 "superblock": true, 00:10:05.390 "num_base_bdevs": 2, 00:10:05.390 "num_base_bdevs_discovered": 1, 00:10:05.390 "num_base_bdevs_operational": 2, 00:10:05.390 "base_bdevs_list": [ 00:10:05.390 { 00:10:05.390 "name": "pt1", 00:10:05.390 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:10:05.390 "is_configured": true, 00:10:05.390 "data_offset": 2048, 00:10:05.390 "data_size": 63488 00:10:05.390 }, 00:10:05.390 { 00:10:05.390 "name": null, 00:10:05.390 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:05.390 "is_configured": false, 00:10:05.390 "data_offset": 2048, 00:10:05.390 "data_size": 63488 00:10:05.390 } 00:10:05.390 ] 00:10:05.390 }' 00:10:05.390 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.390 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.957 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:10:05.957 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:05.957 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:05.957 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:05.957 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.957 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.957 [2024-11-04 14:36:04.823064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:05.957 [2024-11-04 14:36:04.823311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.957 [2024-11-04 14:36:04.823385] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:05.957 [2024-11-04 14:36:04.823659] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.957 [2024-11-04 14:36:04.824360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.957 [2024-11-04 14:36:04.824415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:10:05.957 [2024-11-04 14:36:04.824510] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:05.957 [2024-11-04 14:36:04.824561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:05.957 [2024-11-04 14:36:04.824729] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:05.957 [2024-11-04 14:36:04.824751] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:05.957 [2024-11-04 14:36:04.825055] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:05.957 [2024-11-04 14:36:04.825267] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:05.957 [2024-11-04 14:36:04.825283] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:05.957 [2024-11-04 14:36:04.825465] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:05.957 pt2 00:10:05.957 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.957 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:05.957 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:05.957 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:05.957 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:05.957 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:05.957 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:05.957 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.957 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:10:05.957 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.957 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.957 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.957 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.957 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.958 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:05.958 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.958 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.958 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.958 14:36:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.958 "name": "raid_bdev1", 00:10:05.958 "uuid": "90e8afcd-944e-4ed9-b0b9-74777cd89d72", 00:10:05.958 "strip_size_kb": 64, 00:10:05.958 "state": "online", 00:10:05.958 "raid_level": "concat", 00:10:05.958 "superblock": true, 00:10:05.958 "num_base_bdevs": 2, 00:10:05.958 "num_base_bdevs_discovered": 2, 00:10:05.958 "num_base_bdevs_operational": 2, 00:10:05.958 "base_bdevs_list": [ 00:10:05.958 { 00:10:05.958 "name": "pt1", 00:10:05.958 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:05.958 "is_configured": true, 00:10:05.958 "data_offset": 2048, 00:10:05.958 "data_size": 63488 00:10:05.958 }, 00:10:05.958 { 00:10:05.958 "name": "pt2", 00:10:05.958 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:05.958 "is_configured": true, 00:10:05.958 "data_offset": 2048, 00:10:05.958 "data_size": 63488 00:10:05.958 } 00:10:05.958 ] 00:10:05.958 }' 00:10:05.958 14:36:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.958 14:36:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:06.525 [2024-11-04 14:36:05.347543] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:06.525 "name": "raid_bdev1", 00:10:06.525 "aliases": [ 00:10:06.525 "90e8afcd-944e-4ed9-b0b9-74777cd89d72" 00:10:06.525 ], 00:10:06.525 "product_name": "Raid Volume", 00:10:06.525 "block_size": 512, 00:10:06.525 "num_blocks": 126976, 00:10:06.525 "uuid": "90e8afcd-944e-4ed9-b0b9-74777cd89d72", 00:10:06.525 "assigned_rate_limits": { 00:10:06.525 "rw_ios_per_sec": 0, 00:10:06.525 "rw_mbytes_per_sec": 0, 00:10:06.525 
"r_mbytes_per_sec": 0, 00:10:06.525 "w_mbytes_per_sec": 0 00:10:06.525 }, 00:10:06.525 "claimed": false, 00:10:06.525 "zoned": false, 00:10:06.525 "supported_io_types": { 00:10:06.525 "read": true, 00:10:06.525 "write": true, 00:10:06.525 "unmap": true, 00:10:06.525 "flush": true, 00:10:06.525 "reset": true, 00:10:06.525 "nvme_admin": false, 00:10:06.525 "nvme_io": false, 00:10:06.525 "nvme_io_md": false, 00:10:06.525 "write_zeroes": true, 00:10:06.525 "zcopy": false, 00:10:06.525 "get_zone_info": false, 00:10:06.525 "zone_management": false, 00:10:06.525 "zone_append": false, 00:10:06.525 "compare": false, 00:10:06.525 "compare_and_write": false, 00:10:06.525 "abort": false, 00:10:06.525 "seek_hole": false, 00:10:06.525 "seek_data": false, 00:10:06.525 "copy": false, 00:10:06.525 "nvme_iov_md": false 00:10:06.525 }, 00:10:06.525 "memory_domains": [ 00:10:06.525 { 00:10:06.525 "dma_device_id": "system", 00:10:06.525 "dma_device_type": 1 00:10:06.525 }, 00:10:06.525 { 00:10:06.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.525 "dma_device_type": 2 00:10:06.525 }, 00:10:06.525 { 00:10:06.525 "dma_device_id": "system", 00:10:06.525 "dma_device_type": 1 00:10:06.525 }, 00:10:06.525 { 00:10:06.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.525 "dma_device_type": 2 00:10:06.525 } 00:10:06.525 ], 00:10:06.525 "driver_specific": { 00:10:06.525 "raid": { 00:10:06.525 "uuid": "90e8afcd-944e-4ed9-b0b9-74777cd89d72", 00:10:06.525 "strip_size_kb": 64, 00:10:06.525 "state": "online", 00:10:06.525 "raid_level": "concat", 00:10:06.525 "superblock": true, 00:10:06.525 "num_base_bdevs": 2, 00:10:06.525 "num_base_bdevs_discovered": 2, 00:10:06.525 "num_base_bdevs_operational": 2, 00:10:06.525 "base_bdevs_list": [ 00:10:06.525 { 00:10:06.525 "name": "pt1", 00:10:06.525 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:06.525 "is_configured": true, 00:10:06.525 "data_offset": 2048, 00:10:06.525 "data_size": 63488 00:10:06.525 }, 00:10:06.525 { 00:10:06.525 "name": 
"pt2", 00:10:06.525 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:06.525 "is_configured": true, 00:10:06.525 "data_offset": 2048, 00:10:06.525 "data_size": 63488 00:10:06.525 } 00:10:06.525 ] 00:10:06.525 } 00:10:06.525 } 00:10:06.525 }' 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:06.525 pt2' 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:06.525 [2024-11-04 14:36:05.615622] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:06.525 14:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.784 14:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 90e8afcd-944e-4ed9-b0b9-74777cd89d72 '!=' 90e8afcd-944e-4ed9-b0b9-74777cd89d72 ']' 00:10:06.784 14:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:06.784 14:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:06.784 14:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:06.784 14:36:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62169 00:10:06.784 14:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 62169 ']' 00:10:06.784 14:36:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@956 -- # kill -0 62169 00:10:06.784 14:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:10:06.785 14:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:06.785 14:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62169 00:10:06.785 killing process with pid 62169 00:10:06.785 14:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:06.785 14:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:06.785 14:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62169' 00:10:06.785 14:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 62169 00:10:06.785 [2024-11-04 14:36:05.692558] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:06.785 [2024-11-04 14:36:05.692651] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:06.785 14:36:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 62169 00:10:06.785 [2024-11-04 14:36:05.692728] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:06.785 [2024-11-04 14:36:05.692748] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:06.785 [2024-11-04 14:36:05.878052] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:08.160 14:36:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:08.160 00:10:08.160 real 0m4.860s 00:10:08.160 user 0m7.156s 00:10:08.160 sys 0m0.711s 00:10:08.160 14:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:08.160 14:36:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:08.160 ************************************ 00:10:08.160 END TEST raid_superblock_test 00:10:08.160 ************************************ 00:10:08.160 14:36:06 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:10:08.160 14:36:06 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:08.160 14:36:06 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:08.160 14:36:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:08.160 ************************************ 00:10:08.160 START TEST raid_read_error_test 00:10:08.160 ************************************ 00:10:08.160 14:36:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 read 00:10:08.160 14:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:08.160 14:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:08.160 14:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:08.160 14:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:08.160 14:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:08.160 14:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:08.160 14:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:08.160 14:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:08.160 14:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:08.160 14:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:08.160 14:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:08.160 14:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- 
# base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:08.160 14:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:08.160 14:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:08.160 14:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:08.160 14:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:08.160 14:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:08.161 14:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:08.161 14:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:08.161 14:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:08.161 14:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:08.161 14:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:08.161 14:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.crSIW2nvOR 00:10:08.161 14:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62386 00:10:08.161 14:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62386 00:10:08.161 14:36:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:08.161 14:36:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 62386 ']' 00:10:08.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:08.161 14:36:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.161 14:36:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:08.161 14:36:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.161 14:36:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:08.161 14:36:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.161 [2024-11-04 14:36:07.110051] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:10:08.161 [2024-11-04 14:36:07.110911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62386 ] 00:10:08.419 [2024-11-04 14:36:07.302645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.419 [2024-11-04 14:36:07.458439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.678 [2024-11-04 14:36:07.667586] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.678 [2024-11-04 14:36:07.667671] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.246 14:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:09.246 14:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:09.246 14:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:09.246 14:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:09.246 14:36:08 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.246 14:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.246 BaseBdev1_malloc 00:10:09.246 14:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.246 14:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:09.246 14:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.246 14:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.246 true 00:10:09.246 14:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.246 14:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:09.246 14:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.246 14:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.246 [2024-11-04 14:36:08.125366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:09.246 [2024-11-04 14:36:08.125571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.246 [2024-11-04 14:36:08.125609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:09.246 [2024-11-04 14:36:08.125628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.246 [2024-11-04 14:36:08.128575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.246 [2024-11-04 14:36:08.128634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:09.246 BaseBdev1 00:10:09.246 14:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.246 
14:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:09.246 14:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:09.246 14:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.246 14:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.246 BaseBdev2_malloc 00:10:09.246 14:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.246 14:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:09.246 14:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.246 14:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.246 true 00:10:09.246 14:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.246 14:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:09.246 14:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.246 14:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.246 [2024-11-04 14:36:08.190356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:09.246 [2024-11-04 14:36:08.190629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.246 [2024-11-04 14:36:08.190662] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:09.246 [2024-11-04 14:36:08.190679] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.246 [2024-11-04 14:36:08.193461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:10:09.246 [2024-11-04 14:36:08.193506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:09.246 BaseBdev2 00:10:09.246 14:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.246 14:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:09.246 14:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.246 14:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.246 [2024-11-04 14:36:08.198446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:09.246 [2024-11-04 14:36:08.200859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:09.246 [2024-11-04 14:36:08.201166] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:09.246 [2024-11-04 14:36:08.201189] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:09.246 [2024-11-04 14:36:08.201449] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:09.246 [2024-11-04 14:36:08.201677] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:09.247 [2024-11-04 14:36:08.201696] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:09.247 [2024-11-04 14:36:08.201873] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:09.247 14:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.247 14:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:09.247 14:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:10:09.247 14:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:09.247 14:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:09.247 14:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.247 14:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:09.247 14:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.247 14:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.247 14:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.247 14:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.247 14:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.247 14:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.247 14:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.247 14:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:09.247 14:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.247 14:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.247 "name": "raid_bdev1", 00:10:09.247 "uuid": "ba0aee49-a578-4972-9332-bad7c2ad9ba8", 00:10:09.247 "strip_size_kb": 64, 00:10:09.247 "state": "online", 00:10:09.247 "raid_level": "concat", 00:10:09.247 "superblock": true, 00:10:09.247 "num_base_bdevs": 2, 00:10:09.247 "num_base_bdevs_discovered": 2, 00:10:09.247 "num_base_bdevs_operational": 2, 00:10:09.247 "base_bdevs_list": [ 00:10:09.247 { 00:10:09.247 "name": "BaseBdev1", 00:10:09.247 "uuid": 
"73be9c5e-c787-5667-89c2-0f50c7a457c6", 00:10:09.247 "is_configured": true, 00:10:09.247 "data_offset": 2048, 00:10:09.247 "data_size": 63488 00:10:09.247 }, 00:10:09.247 { 00:10:09.247 "name": "BaseBdev2", 00:10:09.247 "uuid": "995b201d-4260-5122-bcd6-3de45a0ca7f5", 00:10:09.247 "is_configured": true, 00:10:09.247 "data_offset": 2048, 00:10:09.247 "data_size": 63488 00:10:09.247 } 00:10:09.247 ] 00:10:09.247 }' 00:10:09.247 14:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.247 14:36:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.814 14:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:09.814 14:36:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:09.814 [2024-11-04 14:36:08.900358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:10.747 14:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:10.747 14:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.748 14:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.748 14:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.748 14:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:10.748 14:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:10.748 14:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:10.748 14:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:10.748 14:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:10:10.748 14:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.748 14:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:10.748 14:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.748 14:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:10.748 14:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.748 14:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.748 14:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.748 14:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.748 14:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.748 14:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:10.748 14:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.748 14:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.748 14:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.748 14:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.748 "name": "raid_bdev1", 00:10:10.748 "uuid": "ba0aee49-a578-4972-9332-bad7c2ad9ba8", 00:10:10.748 "strip_size_kb": 64, 00:10:10.748 "state": "online", 00:10:10.748 "raid_level": "concat", 00:10:10.748 "superblock": true, 00:10:10.748 "num_base_bdevs": 2, 00:10:10.748 "num_base_bdevs_discovered": 2, 00:10:10.748 "num_base_bdevs_operational": 2, 00:10:10.748 "base_bdevs_list": [ 00:10:10.748 { 00:10:10.748 "name": "BaseBdev1", 00:10:10.748 "uuid": 
"73be9c5e-c787-5667-89c2-0f50c7a457c6", 00:10:10.748 "is_configured": true, 00:10:10.748 "data_offset": 2048, 00:10:10.748 "data_size": 63488 00:10:10.748 }, 00:10:10.748 { 00:10:10.748 "name": "BaseBdev2", 00:10:10.748 "uuid": "995b201d-4260-5122-bcd6-3de45a0ca7f5", 00:10:10.748 "is_configured": true, 00:10:10.748 "data_offset": 2048, 00:10:10.748 "data_size": 63488 00:10:10.748 } 00:10:10.748 ] 00:10:10.748 }' 00:10:10.748 14:36:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.748 14:36:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.314 14:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:11.314 14:36:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.314 14:36:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.314 [2024-11-04 14:36:10.299961] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:11.314 [2024-11-04 14:36:10.300153] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:11.314 { 00:10:11.314 "results": [ 00:10:11.314 { 00:10:11.314 "job": "raid_bdev1", 00:10:11.314 "core_mask": "0x1", 00:10:11.314 "workload": "randrw", 00:10:11.314 "percentage": 50, 00:10:11.314 "status": "finished", 00:10:11.314 "queue_depth": 1, 00:10:11.314 "io_size": 131072, 00:10:11.314 "runtime": 1.397202, 00:10:11.314 "iops": 10542.498507731882, 00:10:11.314 "mibps": 1317.8123134664852, 00:10:11.314 "io_failed": 1, 00:10:11.314 "io_timeout": 0, 00:10:11.314 "avg_latency_us": 131.74843996272548, 00:10:11.314 "min_latency_us": 37.70181818181818, 00:10:11.314 "max_latency_us": 1980.9745454545455 00:10:11.314 } 00:10:11.314 ], 00:10:11.314 "core_count": 1 00:10:11.314 } 00:10:11.314 [2024-11-04 14:36:10.303663] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.314 
[2024-11-04 14:36:10.303719] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:11.314 [2024-11-04 14:36:10.303789] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:11.314 [2024-11-04 14:36:10.303809] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:11.314 14:36:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.314 14:36:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62386 00:10:11.314 14:36:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 62386 ']' 00:10:11.314 14:36:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 62386 00:10:11.314 14:36:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:10:11.314 14:36:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:11.314 14:36:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62386 00:10:11.314 killing process with pid 62386 00:10:11.314 14:36:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:11.314 14:36:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:11.314 14:36:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62386' 00:10:11.314 14:36:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 62386 00:10:11.314 [2024-11-04 14:36:10.341665] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:11.314 14:36:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 62386 00:10:11.572 [2024-11-04 14:36:10.468057] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:12.507 14:36:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:12.507 14:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.crSIW2nvOR 00:10:12.507 14:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:12.507 14:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:12.507 14:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:12.507 ************************************ 00:10:12.507 END TEST raid_read_error_test 00:10:12.507 ************************************ 00:10:12.507 14:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:12.507 14:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:12.507 14:36:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:12.507 00:10:12.507 real 0m4.600s 00:10:12.507 user 0m5.790s 00:10:12.507 sys 0m0.575s 00:10:12.507 14:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:12.507 14:36:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.507 14:36:11 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:10:12.507 14:36:11 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:12.507 14:36:11 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:12.507 14:36:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:12.766 ************************************ 00:10:12.766 START TEST raid_write_error_test 00:10:12.766 ************************************ 00:10:12.766 14:36:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 write 00:10:12.766 14:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 
00:10:12.766 14:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:12.766 14:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:12.766 14:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:12.766 14:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:12.766 14:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:12.766 14:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:12.766 14:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:12.766 14:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:12.766 14:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:12.766 14:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:12.766 14:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:12.766 14:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:12.766 14:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:12.766 14:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:12.766 14:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:12.766 14:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:12.766 14:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:12.766 14:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:12.766 14:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:12.766 
14:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:12.766 14:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:12.766 14:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.u4wDdCkWKx 00:10:12.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.766 14:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62532 00:10:12.766 14:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62532 00:10:12.766 14:36:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 62532 ']' 00:10:12.766 14:36:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:12.766 14:36:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.766 14:36:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:12.766 14:36:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.766 14:36:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:12.766 14:36:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.766 [2024-11-04 14:36:11.740021] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:10:12.766 [2024-11-04 14:36:11.740180] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62532 ] 00:10:13.025 [2024-11-04 14:36:11.914026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.025 [2024-11-04 14:36:12.044938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.283 [2024-11-04 14:36:12.250980] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.283 [2024-11-04 14:36:12.251298] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.850 14:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:13.850 14:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:13.850 14:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:13.850 14:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:13.850 14:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.850 14:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.850 BaseBdev1_malloc 00:10:13.850 14:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.850 14:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:13.850 14:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.850 14:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.850 true 00:10:13.850 14:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:13.850 14:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:13.850 14:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.850 14:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.850 [2024-11-04 14:36:12.825326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:13.850 [2024-11-04 14:36:12.825537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.850 [2024-11-04 14:36:12.825612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:13.850 [2024-11-04 14:36:12.825836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.850 [2024-11-04 14:36:12.828744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.850 [2024-11-04 14:36:12.828829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:13.850 BaseBdev1 00:10:13.850 14:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.850 14:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:13.850 14:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:13.850 14:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.850 14:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.850 BaseBdev2_malloc 00:10:13.851 14:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.851 14:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:13.851 14:36:12 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.851 14:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.851 true 00:10:13.851 14:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.851 14:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:13.851 14:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.851 14:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.851 [2024-11-04 14:36:12.890469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:13.851 [2024-11-04 14:36:12.890541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.851 [2024-11-04 14:36:12.890568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:13.851 [2024-11-04 14:36:12.890585] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.851 [2024-11-04 14:36:12.893329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.851 [2024-11-04 14:36:12.893511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:13.851 BaseBdev2 00:10:13.851 14:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.851 14:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:13.851 14:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.851 14:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.851 [2024-11-04 14:36:12.898536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:10:13.851 [2024-11-04 14:36:12.901009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:13.851 [2024-11-04 14:36:12.901264] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:13.851 [2024-11-04 14:36:12.901288] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:13.851 [2024-11-04 14:36:12.901574] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:13.851 [2024-11-04 14:36:12.901798] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:13.851 [2024-11-04 14:36:12.901827] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:13.851 [2024-11-04 14:36:12.902055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.851 14:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.851 14:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:13.851 14:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:13.851 14:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.851 14:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:13.851 14:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.851 14:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:13.851 14:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.851 14:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.851 14:36:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.851 14:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.851 14:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.851 14:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.851 14:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.851 14:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.851 14:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.851 14:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.851 "name": "raid_bdev1", 00:10:13.851 "uuid": "ee14b3bb-4dee-4b9e-9a7a-7402ddbd8d35", 00:10:13.851 "strip_size_kb": 64, 00:10:13.851 "state": "online", 00:10:13.851 "raid_level": "concat", 00:10:13.851 "superblock": true, 00:10:13.851 "num_base_bdevs": 2, 00:10:13.851 "num_base_bdevs_discovered": 2, 00:10:13.851 "num_base_bdevs_operational": 2, 00:10:13.851 "base_bdevs_list": [ 00:10:13.851 { 00:10:13.851 "name": "BaseBdev1", 00:10:13.851 "uuid": "38eca418-f7fd-5bc5-91cb-312c98214d2b", 00:10:13.851 "is_configured": true, 00:10:13.851 "data_offset": 2048, 00:10:13.851 "data_size": 63488 00:10:13.851 }, 00:10:13.851 { 00:10:13.851 "name": "BaseBdev2", 00:10:13.851 "uuid": "6934329a-c363-5987-a868-579983dfc3ce", 00:10:13.851 "is_configured": true, 00:10:13.851 "data_offset": 2048, 00:10:13.851 "data_size": 63488 00:10:13.851 } 00:10:13.851 ] 00:10:13.851 }' 00:10:13.851 14:36:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.851 14:36:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.419 14:36:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:10:14.419 14:36:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:14.678 [2024-11-04 14:36:13.552129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:15.612 14:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:15.612 14:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.612 14:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.612 14:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.612 14:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:15.612 14:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:15.612 14:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:15.612 14:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:10:15.612 14:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:15.613 14:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.613 14:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:15.613 14:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.613 14:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:15.613 14:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.613 14:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:10:15.613 14:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.613 14:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.613 14:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.613 14:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.613 14:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.613 14:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.613 14:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.613 14:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.613 "name": "raid_bdev1", 00:10:15.613 "uuid": "ee14b3bb-4dee-4b9e-9a7a-7402ddbd8d35", 00:10:15.613 "strip_size_kb": 64, 00:10:15.613 "state": "online", 00:10:15.613 "raid_level": "concat", 00:10:15.613 "superblock": true, 00:10:15.613 "num_base_bdevs": 2, 00:10:15.613 "num_base_bdevs_discovered": 2, 00:10:15.613 "num_base_bdevs_operational": 2, 00:10:15.613 "base_bdevs_list": [ 00:10:15.613 { 00:10:15.613 "name": "BaseBdev1", 00:10:15.613 "uuid": "38eca418-f7fd-5bc5-91cb-312c98214d2b", 00:10:15.613 "is_configured": true, 00:10:15.613 "data_offset": 2048, 00:10:15.613 "data_size": 63488 00:10:15.613 }, 00:10:15.613 { 00:10:15.613 "name": "BaseBdev2", 00:10:15.613 "uuid": "6934329a-c363-5987-a868-579983dfc3ce", 00:10:15.613 "is_configured": true, 00:10:15.613 "data_offset": 2048, 00:10:15.613 "data_size": 63488 00:10:15.613 } 00:10:15.613 ] 00:10:15.613 }' 00:10:15.613 14:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.613 14:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.872 14:36:14 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:15.872 14:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.872 14:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.872 [2024-11-04 14:36:14.942682] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:15.872 [2024-11-04 14:36:14.942723] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:15.872 [2024-11-04 14:36:14.946308] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:15.872 { 00:10:15.872 "results": [ 00:10:15.872 { 00:10:15.872 "job": "raid_bdev1", 00:10:15.872 "core_mask": "0x1", 00:10:15.872 "workload": "randrw", 00:10:15.872 "percentage": 50, 00:10:15.872 "status": "finished", 00:10:15.872 "queue_depth": 1, 00:10:15.872 "io_size": 131072, 00:10:15.872 "runtime": 1.387997, 00:10:15.872 "iops": 10879.70651233396, 00:10:15.872 "mibps": 1359.963314041745, 00:10:15.872 "io_failed": 1, 00:10:15.872 "io_timeout": 0, 00:10:15.872 "avg_latency_us": 128.1767183154549, 00:10:15.872 "min_latency_us": 40.96, 00:10:15.872 "max_latency_us": 1899.0545454545454 00:10:15.872 } 00:10:15.872 ], 00:10:15.872 "core_count": 1 00:10:15.872 } 00:10:15.872 [2024-11-04 14:36:14.946519] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.872 [2024-11-04 14:36:14.946576] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:15.872 [2024-11-04 14:36:14.946600] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:15.872 14:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.872 14:36:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62532 00:10:15.872 14:36:14 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@952 -- # '[' -z 62532 ']' 00:10:15.872 14:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 62532 00:10:15.872 14:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:10:15.872 14:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:15.872 14:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62532 00:10:15.872 killing process with pid 62532 00:10:15.872 14:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:15.872 14:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:15.872 14:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62532' 00:10:15.872 14:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 62532 00:10:15.872 [2024-11-04 14:36:14.976797] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:15.872 14:36:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 62532 00:10:16.130 [2024-11-04 14:36:15.097162] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:17.505 14:36:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.u4wDdCkWKx 00:10:17.505 14:36:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:17.505 14:36:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:17.505 14:36:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:17.505 14:36:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:17.505 14:36:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:17.505 14:36:16 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:10:17.505 14:36:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:17.505 00:10:17.505 real 0m4.569s 00:10:17.505 user 0m5.767s 00:10:17.505 sys 0m0.530s 00:10:17.505 ************************************ 00:10:17.505 END TEST raid_write_error_test 00:10:17.505 ************************************ 00:10:17.505 14:36:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:17.505 14:36:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.505 14:36:16 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:17.505 14:36:16 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:10:17.505 14:36:16 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:17.505 14:36:16 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:17.505 14:36:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:17.505 ************************************ 00:10:17.505 START TEST raid_state_function_test 00:10:17.505 ************************************ 00:10:17.505 14:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 false 00:10:17.505 14:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:17.505 14:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:17.505 14:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:17.505 14:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:17.505 14:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:17.505 14:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:10:17.505 14:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:17.505 14:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:17.505 14:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:17.505 14:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:17.505 14:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:17.505 14:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:17.506 14:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:17.506 14:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:17.506 14:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:17.506 14:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:17.506 14:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:17.506 14:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:17.506 14:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:17.506 14:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:17.506 14:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:17.506 14:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:17.506 14:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62675 00:10:17.506 Process raid pid: 62675 00:10:17.506 14:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62675' 
00:10:17.506 14:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:17.506 14:36:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62675 00:10:17.506 14:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 62675 ']' 00:10:17.506 14:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.506 14:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:17.506 14:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.506 14:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:17.506 14:36:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.506 [2024-11-04 14:36:16.374169] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:10:17.506 [2024-11-04 14:36:16.374365] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:17.506 [2024-11-04 14:36:16.570278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.765 [2024-11-04 14:36:16.733047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.024 [2024-11-04 14:36:16.966364] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:18.024 [2024-11-04 14:36:16.966675] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:18.282 14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:18.282 14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:10:18.282 14:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:18.282 14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.282 14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.282 [2024-11-04 14:36:17.388352] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:18.282 [2024-11-04 14:36:17.388448] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:18.282 [2024-11-04 14:36:17.388481] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:18.282 [2024-11-04 14:36:17.388497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:18.282 14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.282 14:36:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:18.282 14:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.282 14:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.282 14:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.282 14:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.283 14:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:18.283 14:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.283 14:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.283 14:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.283 14:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.283 14:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.283 14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.283 14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.283 14:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.593 14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.593 14:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.593 "name": "Existed_Raid", 00:10:18.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.593 "strip_size_kb": 0, 00:10:18.593 "state": "configuring", 00:10:18.593 
"raid_level": "raid1", 00:10:18.593 "superblock": false, 00:10:18.593 "num_base_bdevs": 2, 00:10:18.593 "num_base_bdevs_discovered": 0, 00:10:18.593 "num_base_bdevs_operational": 2, 00:10:18.593 "base_bdevs_list": [ 00:10:18.593 { 00:10:18.593 "name": "BaseBdev1", 00:10:18.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.593 "is_configured": false, 00:10:18.593 "data_offset": 0, 00:10:18.593 "data_size": 0 00:10:18.593 }, 00:10:18.593 { 00:10:18.593 "name": "BaseBdev2", 00:10:18.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.593 "is_configured": false, 00:10:18.593 "data_offset": 0, 00:10:18.593 "data_size": 0 00:10:18.593 } 00:10:18.593 ] 00:10:18.593 }' 00:10:18.593 14:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.593 14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.876 [2024-11-04 14:36:17.892459] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:18.876 [2024-11-04 14:36:17.892673] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:18.876 [2024-11-04 14:36:17.900435] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:18.876 [2024-11-04 14:36:17.900602] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:18.876 [2024-11-04 14:36:17.900723] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:18.876 [2024-11-04 14:36:17.900785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.876 [2024-11-04 14:36:17.946135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:18.876 BaseBdev1 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.876 [ 00:10:18.876 { 00:10:18.876 "name": "BaseBdev1", 00:10:18.876 "aliases": [ 00:10:18.876 "c2102492-37da-464f-9d54-7e165dea72b3" 00:10:18.876 ], 00:10:18.876 "product_name": "Malloc disk", 00:10:18.876 "block_size": 512, 00:10:18.876 "num_blocks": 65536, 00:10:18.876 "uuid": "c2102492-37da-464f-9d54-7e165dea72b3", 00:10:18.876 "assigned_rate_limits": { 00:10:18.876 "rw_ios_per_sec": 0, 00:10:18.876 "rw_mbytes_per_sec": 0, 00:10:18.876 "r_mbytes_per_sec": 0, 00:10:18.876 "w_mbytes_per_sec": 0 00:10:18.876 }, 00:10:18.876 "claimed": true, 00:10:18.876 "claim_type": "exclusive_write", 00:10:18.876 "zoned": false, 00:10:18.876 "supported_io_types": { 00:10:18.876 "read": true, 00:10:18.876 "write": true, 00:10:18.876 "unmap": true, 00:10:18.876 "flush": true, 00:10:18.876 "reset": true, 00:10:18.876 "nvme_admin": false, 00:10:18.876 "nvme_io": false, 00:10:18.876 "nvme_io_md": false, 00:10:18.876 "write_zeroes": true, 00:10:18.876 "zcopy": true, 00:10:18.876 "get_zone_info": false, 00:10:18.876 "zone_management": false, 00:10:18.876 "zone_append": false, 00:10:18.876 "compare": false, 00:10:18.876 "compare_and_write": false, 00:10:18.876 "abort": true, 00:10:18.876 "seek_hole": false, 00:10:18.876 "seek_data": false, 00:10:18.876 "copy": true, 00:10:18.876 "nvme_iov_md": 
false 00:10:18.876 }, 00:10:18.876 "memory_domains": [ 00:10:18.876 { 00:10:18.876 "dma_device_id": "system", 00:10:18.876 "dma_device_type": 1 00:10:18.876 }, 00:10:18.876 { 00:10:18.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.876 "dma_device_type": 2 00:10:18.876 } 00:10:18.876 ], 00:10:18.876 "driver_specific": {} 00:10:18.876 } 00:10:18.876 ] 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.876 14:36:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.877 
14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.877 14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.877 14:36:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.135 14:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.135 "name": "Existed_Raid", 00:10:19.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.135 "strip_size_kb": 0, 00:10:19.135 "state": "configuring", 00:10:19.135 "raid_level": "raid1", 00:10:19.135 "superblock": false, 00:10:19.135 "num_base_bdevs": 2, 00:10:19.135 "num_base_bdevs_discovered": 1, 00:10:19.135 "num_base_bdevs_operational": 2, 00:10:19.135 "base_bdevs_list": [ 00:10:19.135 { 00:10:19.135 "name": "BaseBdev1", 00:10:19.135 "uuid": "c2102492-37da-464f-9d54-7e165dea72b3", 00:10:19.135 "is_configured": true, 00:10:19.135 "data_offset": 0, 00:10:19.135 "data_size": 65536 00:10:19.135 }, 00:10:19.135 { 00:10:19.135 "name": "BaseBdev2", 00:10:19.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.135 "is_configured": false, 00:10:19.135 "data_offset": 0, 00:10:19.135 "data_size": 0 00:10:19.135 } 00:10:19.135 ] 00:10:19.135 }' 00:10:19.135 14:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.135 14:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.394 14:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:19.394 14:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.394 14:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.394 [2024-11-04 14:36:18.474346] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:19.394 [2024-11-04 14:36:18.474557] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:19.394 14:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.394 14:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:19.394 14:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.394 14:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.394 [2024-11-04 14:36:18.482424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:19.394 [2024-11-04 14:36:18.485059] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:19.394 [2024-11-04 14:36:18.485258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:19.394 14:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.394 14:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:19.394 14:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:19.394 14:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:19.394 14:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.394 14:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.394 14:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.394 14:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.395 14:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:10:19.395 14:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.395 14:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.395 14:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.395 14:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.395 14:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.395 14:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.395 14:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.395 14:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.395 14:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.654 14:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.654 "name": "Existed_Raid", 00:10:19.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.654 "strip_size_kb": 0, 00:10:19.654 "state": "configuring", 00:10:19.654 "raid_level": "raid1", 00:10:19.654 "superblock": false, 00:10:19.654 "num_base_bdevs": 2, 00:10:19.654 "num_base_bdevs_discovered": 1, 00:10:19.654 "num_base_bdevs_operational": 2, 00:10:19.654 "base_bdevs_list": [ 00:10:19.654 { 00:10:19.654 "name": "BaseBdev1", 00:10:19.654 "uuid": "c2102492-37da-464f-9d54-7e165dea72b3", 00:10:19.654 "is_configured": true, 00:10:19.654 "data_offset": 0, 00:10:19.654 "data_size": 65536 00:10:19.654 }, 00:10:19.654 { 00:10:19.654 "name": "BaseBdev2", 00:10:19.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.654 "is_configured": false, 00:10:19.654 "data_offset": 0, 00:10:19.654 "data_size": 0 00:10:19.654 } 00:10:19.654 ] 
00:10:19.654 }' 00:10:19.654 14:36:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.654 14:36:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.913 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:19.913 14:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.913 14:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.288 [2024-11-04 14:36:19.041851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:20.288 [2024-11-04 14:36:19.041909] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:20.288 [2024-11-04 14:36:19.041921] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:20.288 [2024-11-04 14:36:19.042326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:20.288 BaseBdev2 00:10:20.288 [2024-11-04 14:36:19.042560] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:20.288 [2024-11-04 14:36:19.042590] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:20.288 [2024-11-04 14:36:19.042897] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.288 14:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.288 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:20.288 14:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:20.288 14:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:20.288 14:36:19 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@903 -- # local i 00:10:20.288 14:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:20.288 14:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:20.288 14:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:20.288 14:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.288 14:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.288 14:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.288 14:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:20.288 14:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.288 14:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.288 [ 00:10:20.288 { 00:10:20.288 "name": "BaseBdev2", 00:10:20.288 "aliases": [ 00:10:20.288 "85ef6d63-0ad0-4c50-8e39-5d34ef9a03c6" 00:10:20.288 ], 00:10:20.288 "product_name": "Malloc disk", 00:10:20.288 "block_size": 512, 00:10:20.288 "num_blocks": 65536, 00:10:20.288 "uuid": "85ef6d63-0ad0-4c50-8e39-5d34ef9a03c6", 00:10:20.288 "assigned_rate_limits": { 00:10:20.288 "rw_ios_per_sec": 0, 00:10:20.288 "rw_mbytes_per_sec": 0, 00:10:20.288 "r_mbytes_per_sec": 0, 00:10:20.288 "w_mbytes_per_sec": 0 00:10:20.288 }, 00:10:20.288 "claimed": true, 00:10:20.288 "claim_type": "exclusive_write", 00:10:20.288 "zoned": false, 00:10:20.288 "supported_io_types": { 00:10:20.288 "read": true, 00:10:20.288 "write": true, 00:10:20.288 "unmap": true, 00:10:20.288 "flush": true, 00:10:20.288 "reset": true, 00:10:20.289 "nvme_admin": false, 00:10:20.289 "nvme_io": false, 00:10:20.289 "nvme_io_md": false, 00:10:20.289 "write_zeroes": 
true, 00:10:20.289 "zcopy": true, 00:10:20.289 "get_zone_info": false, 00:10:20.289 "zone_management": false, 00:10:20.289 "zone_append": false, 00:10:20.289 "compare": false, 00:10:20.289 "compare_and_write": false, 00:10:20.289 "abort": true, 00:10:20.289 "seek_hole": false, 00:10:20.289 "seek_data": false, 00:10:20.289 "copy": true, 00:10:20.289 "nvme_iov_md": false 00:10:20.289 }, 00:10:20.289 "memory_domains": [ 00:10:20.289 { 00:10:20.289 "dma_device_id": "system", 00:10:20.289 "dma_device_type": 1 00:10:20.289 }, 00:10:20.289 { 00:10:20.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.289 "dma_device_type": 2 00:10:20.289 } 00:10:20.289 ], 00:10:20.289 "driver_specific": {} 00:10:20.289 } 00:10:20.289 ] 00:10:20.289 14:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.289 14:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:20.289 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:20.289 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:20.289 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:20.289 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.289 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.289 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.289 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.289 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:20.289 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.289 14:36:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.289 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.289 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.289 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.289 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.289 14:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.289 14:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.289 14:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.289 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.289 "name": "Existed_Raid", 00:10:20.289 "uuid": "e5cb5b36-2d4e-4693-8c27-e8f85ffef39c", 00:10:20.289 "strip_size_kb": 0, 00:10:20.289 "state": "online", 00:10:20.289 "raid_level": "raid1", 00:10:20.289 "superblock": false, 00:10:20.289 "num_base_bdevs": 2, 00:10:20.289 "num_base_bdevs_discovered": 2, 00:10:20.289 "num_base_bdevs_operational": 2, 00:10:20.289 "base_bdevs_list": [ 00:10:20.289 { 00:10:20.289 "name": "BaseBdev1", 00:10:20.289 "uuid": "c2102492-37da-464f-9d54-7e165dea72b3", 00:10:20.289 "is_configured": true, 00:10:20.289 "data_offset": 0, 00:10:20.289 "data_size": 65536 00:10:20.289 }, 00:10:20.289 { 00:10:20.289 "name": "BaseBdev2", 00:10:20.289 "uuid": "85ef6d63-0ad0-4c50-8e39-5d34ef9a03c6", 00:10:20.289 "is_configured": true, 00:10:20.289 "data_offset": 0, 00:10:20.289 "data_size": 65536 00:10:20.289 } 00:10:20.289 ] 00:10:20.289 }' 00:10:20.289 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.289 14:36:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.562 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:20.562 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:20.562 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:20.562 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:20.562 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:20.562 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:20.562 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:20.562 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:20.562 14:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.562 14:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.562 [2024-11-04 14:36:19.598522] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.562 14:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.563 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:20.563 "name": "Existed_Raid", 00:10:20.563 "aliases": [ 00:10:20.563 "e5cb5b36-2d4e-4693-8c27-e8f85ffef39c" 00:10:20.563 ], 00:10:20.563 "product_name": "Raid Volume", 00:10:20.563 "block_size": 512, 00:10:20.563 "num_blocks": 65536, 00:10:20.563 "uuid": "e5cb5b36-2d4e-4693-8c27-e8f85ffef39c", 00:10:20.563 "assigned_rate_limits": { 00:10:20.563 "rw_ios_per_sec": 0, 00:10:20.563 "rw_mbytes_per_sec": 0, 00:10:20.563 "r_mbytes_per_sec": 0, 00:10:20.563 
"w_mbytes_per_sec": 0 00:10:20.563 }, 00:10:20.563 "claimed": false, 00:10:20.563 "zoned": false, 00:10:20.563 "supported_io_types": { 00:10:20.563 "read": true, 00:10:20.563 "write": true, 00:10:20.563 "unmap": false, 00:10:20.563 "flush": false, 00:10:20.563 "reset": true, 00:10:20.563 "nvme_admin": false, 00:10:20.563 "nvme_io": false, 00:10:20.563 "nvme_io_md": false, 00:10:20.563 "write_zeroes": true, 00:10:20.563 "zcopy": false, 00:10:20.563 "get_zone_info": false, 00:10:20.563 "zone_management": false, 00:10:20.563 "zone_append": false, 00:10:20.563 "compare": false, 00:10:20.563 "compare_and_write": false, 00:10:20.563 "abort": false, 00:10:20.563 "seek_hole": false, 00:10:20.563 "seek_data": false, 00:10:20.563 "copy": false, 00:10:20.563 "nvme_iov_md": false 00:10:20.563 }, 00:10:20.563 "memory_domains": [ 00:10:20.563 { 00:10:20.563 "dma_device_id": "system", 00:10:20.563 "dma_device_type": 1 00:10:20.563 }, 00:10:20.563 { 00:10:20.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.563 "dma_device_type": 2 00:10:20.563 }, 00:10:20.563 { 00:10:20.563 "dma_device_id": "system", 00:10:20.563 "dma_device_type": 1 00:10:20.563 }, 00:10:20.563 { 00:10:20.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.563 "dma_device_type": 2 00:10:20.563 } 00:10:20.563 ], 00:10:20.563 "driver_specific": { 00:10:20.563 "raid": { 00:10:20.563 "uuid": "e5cb5b36-2d4e-4693-8c27-e8f85ffef39c", 00:10:20.563 "strip_size_kb": 0, 00:10:20.563 "state": "online", 00:10:20.563 "raid_level": "raid1", 00:10:20.563 "superblock": false, 00:10:20.563 "num_base_bdevs": 2, 00:10:20.563 "num_base_bdevs_discovered": 2, 00:10:20.563 "num_base_bdevs_operational": 2, 00:10:20.563 "base_bdevs_list": [ 00:10:20.563 { 00:10:20.563 "name": "BaseBdev1", 00:10:20.563 "uuid": "c2102492-37da-464f-9d54-7e165dea72b3", 00:10:20.563 "is_configured": true, 00:10:20.563 "data_offset": 0, 00:10:20.563 "data_size": 65536 00:10:20.563 }, 00:10:20.563 { 00:10:20.563 "name": "BaseBdev2", 00:10:20.563 "uuid": 
"85ef6d63-0ad0-4c50-8e39-5d34ef9a03c6", 00:10:20.563 "is_configured": true, 00:10:20.563 "data_offset": 0, 00:10:20.563 "data_size": 65536 00:10:20.563 } 00:10:20.563 ] 00:10:20.563 } 00:10:20.563 } 00:10:20.563 }' 00:10:20.563 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:20.825 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:20.825 BaseBdev2' 00:10:20.825 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.825 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:20.825 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.825 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:20.825 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.825 14:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.825 14:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.825 14:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.825 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.825 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.825 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.825 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:20.825 14:36:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.825 14:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.825 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.825 14:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.825 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.825 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.825 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:20.825 14:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.825 14:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.825 [2024-11-04 14:36:19.882275] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:21.091 14:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.091 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:21.091 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:21.091 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:21.091 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:21.091 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:21.091 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:10:21.091 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:21.091 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.091 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.091 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.091 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:21.091 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.091 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.091 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.091 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.091 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.091 14:36:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.091 14:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.091 14:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.091 14:36:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.091 14:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.091 "name": "Existed_Raid", 00:10:21.091 "uuid": "e5cb5b36-2d4e-4693-8c27-e8f85ffef39c", 00:10:21.091 "strip_size_kb": 0, 00:10:21.091 "state": "online", 00:10:21.091 "raid_level": "raid1", 00:10:21.091 "superblock": false, 00:10:21.091 "num_base_bdevs": 2, 00:10:21.091 "num_base_bdevs_discovered": 1, 00:10:21.091 "num_base_bdevs_operational": 1, 00:10:21.091 "base_bdevs_list": [ 00:10:21.091 { 
00:10:21.091 "name": null, 00:10:21.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.091 "is_configured": false, 00:10:21.091 "data_offset": 0, 00:10:21.091 "data_size": 65536 00:10:21.091 }, 00:10:21.091 { 00:10:21.091 "name": "BaseBdev2", 00:10:21.091 "uuid": "85ef6d63-0ad0-4c50-8e39-5d34ef9a03c6", 00:10:21.091 "is_configured": true, 00:10:21.091 "data_offset": 0, 00:10:21.091 "data_size": 65536 00:10:21.091 } 00:10:21.091 ] 00:10:21.091 }' 00:10:21.091 14:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.091 14:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.359 14:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:21.359 14:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:21.359 14:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.359 14:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:21.359 14:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.359 14:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.631 14:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.631 14:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:21.631 14:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:21.631 14:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:21.631 14:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.631 14:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:21.631 [2024-11-04 14:36:20.527450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:21.631 [2024-11-04 14:36:20.527754] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:21.631 [2024-11-04 14:36:20.615507] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:21.631 [2024-11-04 14:36:20.615721] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:21.631 [2024-11-04 14:36:20.615877] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:21.631 14:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.631 14:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:21.631 14:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:21.631 14:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.631 14:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.631 14:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.632 14:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:21.632 14:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.632 14:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:21.632 14:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:21.632 14:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:21.632 14:36:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62675 00:10:21.632 14:36:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 62675 ']' 00:10:21.632 14:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 62675 00:10:21.632 14:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:10:21.632 14:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:21.632 14:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62675 00:10:21.632 killing process with pid 62675 00:10:21.632 14:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:21.632 14:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:21.632 14:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62675' 00:10:21.632 14:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 62675 00:10:21.632 [2024-11-04 14:36:20.701100] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:21.632 14:36:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 62675 00:10:21.632 [2024-11-04 14:36:20.716372] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:23.051 00:10:23.051 real 0m5.500s 00:10:23.051 user 0m8.314s 00:10:23.051 sys 0m0.777s 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.051 ************************************ 00:10:23.051 END TEST raid_state_function_test 00:10:23.051 ************************************ 00:10:23.051 14:36:21 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:10:23.051 14:36:21 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:23.051 14:36:21 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:23.051 14:36:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:23.051 ************************************ 00:10:23.051 START TEST raid_state_function_test_sb 00:10:23.051 ************************************ 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62934 00:10:23.051 Process raid pid: 62934 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62934' 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62934 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 62934 ']' 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:23.051 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:23.051 14:36:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.051 [2024-11-04 14:36:21.925920] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:10:23.051 [2024-11-04 14:36:21.926129] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.051 [2024-11-04 14:36:22.117942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.322 [2024-11-04 14:36:22.277034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.580 [2024-11-04 14:36:22.487881] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.580 [2024-11-04 14:36:22.487951] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.839 14:36:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:23.839 14:36:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:10:23.839 14:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:23.839 14:36:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.839 14:36:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:23.839 [2024-11-04 14:36:22.953317] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:23.839 [2024-11-04 14:36:22.953376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:23.839 [2024-11-04 14:36:22.953392] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:23.839 [2024-11-04 14:36:22.953408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:23.839 14:36:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.839 14:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:23.839 14:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.839 14:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:23.839 14:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.839 14:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.839 14:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:23.839 14:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.839 14:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.839 14:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.839 14:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.098 14:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.098 14:36:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:24.098 14:36:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.098 14:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.098 14:36:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.098 14:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.098 "name": "Existed_Raid", 00:10:24.098 "uuid": "9a3439b3-087b-47d3-9b7e-3c2092a3947d", 00:10:24.098 "strip_size_kb": 0, 00:10:24.098 "state": "configuring", 00:10:24.098 "raid_level": "raid1", 00:10:24.098 "superblock": true, 00:10:24.098 "num_base_bdevs": 2, 00:10:24.098 "num_base_bdevs_discovered": 0, 00:10:24.098 "num_base_bdevs_operational": 2, 00:10:24.098 "base_bdevs_list": [ 00:10:24.098 { 00:10:24.098 "name": "BaseBdev1", 00:10:24.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.098 "is_configured": false, 00:10:24.098 "data_offset": 0, 00:10:24.098 "data_size": 0 00:10:24.098 }, 00:10:24.098 { 00:10:24.098 "name": "BaseBdev2", 00:10:24.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.098 "is_configured": false, 00:10:24.098 "data_offset": 0, 00:10:24.098 "data_size": 0 00:10:24.098 } 00:10:24.098 ] 00:10:24.098 }' 00:10:24.098 14:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.098 14:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.356 14:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:24.356 14:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.356 14:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.356 [2024-11-04 14:36:23.457390] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:10:24.356 [2024-11-04 14:36:23.457434] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:24.356 14:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.356 14:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:24.356 14:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.356 14:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.356 [2024-11-04 14:36:23.469380] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:24.356 [2024-11-04 14:36:23.469428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:24.356 [2024-11-04 14:36:23.469442] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:24.356 [2024-11-04 14:36:23.469461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:24.356 14:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.356 14:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:24.356 14:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.356 14:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.634 [2024-11-04 14:36:23.519571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:24.634 BaseBdev1 00:10:24.634 14:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.634 14:36:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:24.634 14:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:24.634 14:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:24.634 14:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:24.634 14:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:24.634 14:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:24.634 14:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:24.634 14:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.634 14:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.634 14:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.634 14:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:24.634 14:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.634 14:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.634 [ 00:10:24.634 { 00:10:24.634 "name": "BaseBdev1", 00:10:24.634 "aliases": [ 00:10:24.634 "16e2771c-9c4e-4bd7-92b4-67a293d545d4" 00:10:24.634 ], 00:10:24.634 "product_name": "Malloc disk", 00:10:24.634 "block_size": 512, 00:10:24.634 "num_blocks": 65536, 00:10:24.634 "uuid": "16e2771c-9c4e-4bd7-92b4-67a293d545d4", 00:10:24.634 "assigned_rate_limits": { 00:10:24.634 "rw_ios_per_sec": 0, 00:10:24.634 "rw_mbytes_per_sec": 0, 00:10:24.634 "r_mbytes_per_sec": 0, 00:10:24.634 "w_mbytes_per_sec": 0 00:10:24.634 }, 00:10:24.634 "claimed": true, 
00:10:24.634 "claim_type": "exclusive_write", 00:10:24.634 "zoned": false, 00:10:24.634 "supported_io_types": { 00:10:24.634 "read": true, 00:10:24.634 "write": true, 00:10:24.634 "unmap": true, 00:10:24.634 "flush": true, 00:10:24.634 "reset": true, 00:10:24.634 "nvme_admin": false, 00:10:24.634 "nvme_io": false, 00:10:24.634 "nvme_io_md": false, 00:10:24.634 "write_zeroes": true, 00:10:24.634 "zcopy": true, 00:10:24.634 "get_zone_info": false, 00:10:24.634 "zone_management": false, 00:10:24.634 "zone_append": false, 00:10:24.634 "compare": false, 00:10:24.634 "compare_and_write": false, 00:10:24.634 "abort": true, 00:10:24.634 "seek_hole": false, 00:10:24.634 "seek_data": false, 00:10:24.634 "copy": true, 00:10:24.634 "nvme_iov_md": false 00:10:24.634 }, 00:10:24.634 "memory_domains": [ 00:10:24.634 { 00:10:24.634 "dma_device_id": "system", 00:10:24.634 "dma_device_type": 1 00:10:24.634 }, 00:10:24.634 { 00:10:24.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.634 "dma_device_type": 2 00:10:24.634 } 00:10:24.634 ], 00:10:24.634 "driver_specific": {} 00:10:24.634 } 00:10:24.634 ] 00:10:24.634 14:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.634 14:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:24.634 14:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:24.634 14:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.634 14:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.634 14:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.634 14:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.634 14:36:23 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:24.634 14:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.634 14:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.634 14:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.634 14:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.634 14:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.634 14:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.634 14:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.634 14:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:24.634 14:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.634 14:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.634 "name": "Existed_Raid", 00:10:24.634 "uuid": "bfde860e-f199-459c-ad5e-0654156c9d8b", 00:10:24.634 "strip_size_kb": 0, 00:10:24.634 "state": "configuring", 00:10:24.634 "raid_level": "raid1", 00:10:24.634 "superblock": true, 00:10:24.634 "num_base_bdevs": 2, 00:10:24.634 "num_base_bdevs_discovered": 1, 00:10:24.634 "num_base_bdevs_operational": 2, 00:10:24.634 "base_bdevs_list": [ 00:10:24.634 { 00:10:24.634 "name": "BaseBdev1", 00:10:24.634 "uuid": "16e2771c-9c4e-4bd7-92b4-67a293d545d4", 00:10:24.634 "is_configured": true, 00:10:24.634 "data_offset": 2048, 00:10:24.634 "data_size": 63488 00:10:24.634 }, 00:10:24.634 { 00:10:24.635 "name": "BaseBdev2", 00:10:24.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.635 "is_configured": false, 00:10:24.635 
"data_offset": 0, 00:10:24.635 "data_size": 0 00:10:24.635 } 00:10:24.635 ] 00:10:24.635 }' 00:10:24.635 14:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.635 14:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.203 14:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:25.203 14:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.203 14:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.203 [2024-11-04 14:36:24.055800] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:25.203 [2024-11-04 14:36:24.055866] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:25.203 14:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.203 14:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:25.203 14:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.203 14:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.203 [2024-11-04 14:36:24.063850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:25.203 [2024-11-04 14:36:24.066313] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:25.203 [2024-11-04 14:36:24.066366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:25.203 14:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.203 14:36:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:25.203 14:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:25.203 14:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:25.203 14:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.203 14:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.203 14:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.203 14:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.203 14:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:25.203 14:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.203 14:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.203 14:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.203 14:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.203 14:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.203 14:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.203 14:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.203 14:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.203 14:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.203 14:36:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.203 "name": "Existed_Raid", 00:10:25.203 "uuid": "eccd4c8e-4ced-4b79-952a-648b37af6579", 00:10:25.203 "strip_size_kb": 0, 00:10:25.203 "state": "configuring", 00:10:25.203 "raid_level": "raid1", 00:10:25.203 "superblock": true, 00:10:25.203 "num_base_bdevs": 2, 00:10:25.203 "num_base_bdevs_discovered": 1, 00:10:25.203 "num_base_bdevs_operational": 2, 00:10:25.203 "base_bdevs_list": [ 00:10:25.203 { 00:10:25.203 "name": "BaseBdev1", 00:10:25.203 "uuid": "16e2771c-9c4e-4bd7-92b4-67a293d545d4", 00:10:25.203 "is_configured": true, 00:10:25.203 "data_offset": 2048, 00:10:25.203 "data_size": 63488 00:10:25.203 }, 00:10:25.203 { 00:10:25.203 "name": "BaseBdev2", 00:10:25.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.203 "is_configured": false, 00:10:25.203 "data_offset": 0, 00:10:25.203 "data_size": 0 00:10:25.203 } 00:10:25.203 ] 00:10:25.203 }' 00:10:25.203 14:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.203 14:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.463 14:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:25.463 14:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.463 14:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.722 [2024-11-04 14:36:24.602822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:25.722 [2024-11-04 14:36:24.603170] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:25.722 [2024-11-04 14:36:24.603195] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:25.722 [2024-11-04 14:36:24.603518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:25.722 
BaseBdev2 00:10:25.722 [2024-11-04 14:36:24.603722] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:25.722 [2024-11-04 14:36:24.603743] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:25.722 [2024-11-04 14:36:24.603916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:25.722 [ 00:10:25.722 { 00:10:25.722 "name": "BaseBdev2", 00:10:25.722 "aliases": [ 00:10:25.722 "27535865-9c9f-4e12-91a5-13937682efb3" 00:10:25.722 ], 00:10:25.722 "product_name": "Malloc disk", 00:10:25.722 "block_size": 512, 00:10:25.722 "num_blocks": 65536, 00:10:25.722 "uuid": "27535865-9c9f-4e12-91a5-13937682efb3", 00:10:25.722 "assigned_rate_limits": { 00:10:25.722 "rw_ios_per_sec": 0, 00:10:25.722 "rw_mbytes_per_sec": 0, 00:10:25.722 "r_mbytes_per_sec": 0, 00:10:25.722 "w_mbytes_per_sec": 0 00:10:25.722 }, 00:10:25.722 "claimed": true, 00:10:25.722 "claim_type": "exclusive_write", 00:10:25.722 "zoned": false, 00:10:25.722 "supported_io_types": { 00:10:25.722 "read": true, 00:10:25.722 "write": true, 00:10:25.722 "unmap": true, 00:10:25.722 "flush": true, 00:10:25.722 "reset": true, 00:10:25.722 "nvme_admin": false, 00:10:25.722 "nvme_io": false, 00:10:25.722 "nvme_io_md": false, 00:10:25.722 "write_zeroes": true, 00:10:25.722 "zcopy": true, 00:10:25.722 "get_zone_info": false, 00:10:25.722 "zone_management": false, 00:10:25.722 "zone_append": false, 00:10:25.722 "compare": false, 00:10:25.722 "compare_and_write": false, 00:10:25.722 "abort": true, 00:10:25.722 "seek_hole": false, 00:10:25.722 "seek_data": false, 00:10:25.722 "copy": true, 00:10:25.722 "nvme_iov_md": false 00:10:25.722 }, 00:10:25.722 "memory_domains": [ 00:10:25.722 { 00:10:25.722 "dma_device_id": "system", 00:10:25.722 "dma_device_type": 1 00:10:25.722 }, 00:10:25.722 { 00:10:25.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.722 "dma_device_type": 2 00:10:25.722 } 00:10:25.722 ], 00:10:25.722 "driver_specific": {} 00:10:25.722 } 00:10:25.722 ] 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:25.722 "name": "Existed_Raid", 00:10:25.722 "uuid": "eccd4c8e-4ced-4b79-952a-648b37af6579", 00:10:25.722 "strip_size_kb": 0, 00:10:25.722 "state": "online", 00:10:25.722 "raid_level": "raid1", 00:10:25.722 "superblock": true, 00:10:25.722 "num_base_bdevs": 2, 00:10:25.722 "num_base_bdevs_discovered": 2, 00:10:25.722 "num_base_bdevs_operational": 2, 00:10:25.722 "base_bdevs_list": [ 00:10:25.722 { 00:10:25.722 "name": "BaseBdev1", 00:10:25.722 "uuid": "16e2771c-9c4e-4bd7-92b4-67a293d545d4", 00:10:25.722 "is_configured": true, 00:10:25.722 "data_offset": 2048, 00:10:25.722 "data_size": 63488 00:10:25.722 }, 00:10:25.722 { 00:10:25.722 "name": "BaseBdev2", 00:10:25.722 "uuid": "27535865-9c9f-4e12-91a5-13937682efb3", 00:10:25.722 "is_configured": true, 00:10:25.722 "data_offset": 2048, 00:10:25.722 "data_size": 63488 00:10:25.722 } 00:10:25.722 ] 00:10:25.722 }' 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.722 14:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.289 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:26.289 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:26.289 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:26.289 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:26.289 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:26.289 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:26.289 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:26.289 14:36:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.289 14:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.289 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:26.289 [2024-11-04 14:36:25.135382] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:26.289 14:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.289 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:26.289 "name": "Existed_Raid", 00:10:26.289 "aliases": [ 00:10:26.289 "eccd4c8e-4ced-4b79-952a-648b37af6579" 00:10:26.289 ], 00:10:26.289 "product_name": "Raid Volume", 00:10:26.289 "block_size": 512, 00:10:26.289 "num_blocks": 63488, 00:10:26.289 "uuid": "eccd4c8e-4ced-4b79-952a-648b37af6579", 00:10:26.289 "assigned_rate_limits": { 00:10:26.289 "rw_ios_per_sec": 0, 00:10:26.289 "rw_mbytes_per_sec": 0, 00:10:26.289 "r_mbytes_per_sec": 0, 00:10:26.289 "w_mbytes_per_sec": 0 00:10:26.289 }, 00:10:26.289 "claimed": false, 00:10:26.289 "zoned": false, 00:10:26.289 "supported_io_types": { 00:10:26.289 "read": true, 00:10:26.289 "write": true, 00:10:26.289 "unmap": false, 00:10:26.289 "flush": false, 00:10:26.289 "reset": true, 00:10:26.289 "nvme_admin": false, 00:10:26.289 "nvme_io": false, 00:10:26.289 "nvme_io_md": false, 00:10:26.289 "write_zeroes": true, 00:10:26.289 "zcopy": false, 00:10:26.289 "get_zone_info": false, 00:10:26.289 "zone_management": false, 00:10:26.289 "zone_append": false, 00:10:26.289 "compare": false, 00:10:26.289 "compare_and_write": false, 00:10:26.289 "abort": false, 00:10:26.289 "seek_hole": false, 00:10:26.289 "seek_data": false, 00:10:26.289 "copy": false, 00:10:26.289 "nvme_iov_md": false 00:10:26.289 }, 00:10:26.289 "memory_domains": [ 00:10:26.289 { 00:10:26.289 "dma_device_id": "system", 00:10:26.289 
"dma_device_type": 1 00:10:26.289 }, 00:10:26.289 { 00:10:26.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.289 "dma_device_type": 2 00:10:26.289 }, 00:10:26.289 { 00:10:26.289 "dma_device_id": "system", 00:10:26.289 "dma_device_type": 1 00:10:26.289 }, 00:10:26.289 { 00:10:26.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.289 "dma_device_type": 2 00:10:26.289 } 00:10:26.289 ], 00:10:26.289 "driver_specific": { 00:10:26.289 "raid": { 00:10:26.289 "uuid": "eccd4c8e-4ced-4b79-952a-648b37af6579", 00:10:26.290 "strip_size_kb": 0, 00:10:26.290 "state": "online", 00:10:26.290 "raid_level": "raid1", 00:10:26.290 "superblock": true, 00:10:26.290 "num_base_bdevs": 2, 00:10:26.290 "num_base_bdevs_discovered": 2, 00:10:26.290 "num_base_bdevs_operational": 2, 00:10:26.290 "base_bdevs_list": [ 00:10:26.290 { 00:10:26.290 "name": "BaseBdev1", 00:10:26.290 "uuid": "16e2771c-9c4e-4bd7-92b4-67a293d545d4", 00:10:26.290 "is_configured": true, 00:10:26.290 "data_offset": 2048, 00:10:26.290 "data_size": 63488 00:10:26.290 }, 00:10:26.290 { 00:10:26.290 "name": "BaseBdev2", 00:10:26.290 "uuid": "27535865-9c9f-4e12-91a5-13937682efb3", 00:10:26.290 "is_configured": true, 00:10:26.290 "data_offset": 2048, 00:10:26.290 "data_size": 63488 00:10:26.290 } 00:10:26.290 ] 00:10:26.290 } 00:10:26.290 } 00:10:26.290 }' 00:10:26.290 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:26.290 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:26.290 BaseBdev2' 00:10:26.290 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.290 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:26.290 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:10:26.290 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:26.290 14:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.290 14:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.290 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.290 14:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.290 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.290 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.290 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:26.290 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:26.290 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:26.290 14:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.290 14:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.290 14:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.290 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:26.290 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:26.290 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:26.290 14:36:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.290 14:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.290 [2024-11-04 14:36:25.391216] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:26.549 14:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.549 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:26.549 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:26.549 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:26.549 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:26.549 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:26.549 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:10:26.549 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.549 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.549 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.549 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.549 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:26.549 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.549 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.549 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:26.549 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.549 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.549 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.549 14:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.549 14:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:26.549 14:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.549 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.549 "name": "Existed_Raid", 00:10:26.549 "uuid": "eccd4c8e-4ced-4b79-952a-648b37af6579", 00:10:26.549 "strip_size_kb": 0, 00:10:26.549 "state": "online", 00:10:26.549 "raid_level": "raid1", 00:10:26.549 "superblock": true, 00:10:26.549 "num_base_bdevs": 2, 00:10:26.549 "num_base_bdevs_discovered": 1, 00:10:26.549 "num_base_bdevs_operational": 1, 00:10:26.549 "base_bdevs_list": [ 00:10:26.549 { 00:10:26.549 "name": null, 00:10:26.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.549 "is_configured": false, 00:10:26.549 "data_offset": 0, 00:10:26.549 "data_size": 63488 00:10:26.549 }, 00:10:26.549 { 00:10:26.549 "name": "BaseBdev2", 00:10:26.549 "uuid": "27535865-9c9f-4e12-91a5-13937682efb3", 00:10:26.549 "is_configured": true, 00:10:26.549 "data_offset": 2048, 00:10:26.549 "data_size": 63488 00:10:26.549 } 00:10:26.549 ] 00:10:26.549 }' 00:10:26.549 14:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.549 14:36:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.115 14:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:10:27.115 14:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:27.115 14:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.115 14:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:27.115 14:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.115 14:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.115 14:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.115 14:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:27.115 14:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:27.115 14:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:27.115 14:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.115 14:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.115 [2024-11-04 14:36:26.052940] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:27.115 [2024-11-04 14:36:26.053076] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:27.115 [2024-11-04 14:36:26.141663] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:27.115 [2024-11-04 14:36:26.141745] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:27.115 [2024-11-04 14:36:26.141767] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:27.115 14:36:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.115 14:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:27.115 14:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:27.115 14:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.115 14:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:27.115 14:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.115 14:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:27.115 14:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.115 14:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:27.115 14:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:27.115 14:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:27.115 14:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62934 00:10:27.115 14:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 62934 ']' 00:10:27.115 14:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 62934 00:10:27.115 14:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:10:27.115 14:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:27.115 14:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62934 00:10:27.115 14:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:27.115 14:36:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:27.115 killing process with pid 62934 00:10:27.115 14:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62934' 00:10:27.115 14:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 62934 00:10:27.115 [2024-11-04 14:36:26.231761] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:27.115 14:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 62934 00:10:27.374 [2024-11-04 14:36:26.246823] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:28.310 14:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:28.310 00:10:28.310 real 0m5.486s 00:10:28.310 user 0m8.275s 00:10:28.310 sys 0m0.789s 00:10:28.310 14:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:28.310 14:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:28.310 ************************************ 00:10:28.310 END TEST raid_state_function_test_sb 00:10:28.310 ************************************ 00:10:28.310 14:36:27 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:10:28.310 14:36:27 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:28.310 14:36:27 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:28.310 14:36:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:28.310 ************************************ 00:10:28.310 START TEST raid_superblock_test 00:10:28.310 ************************************ 00:10:28.310 14:36:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:10:28.310 14:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local 
raid_level=raid1 00:10:28.310 14:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:10:28.310 14:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:28.310 14:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:28.310 14:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:28.310 14:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:28.310 14:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:28.310 14:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:28.310 14:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:28.310 14:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:28.310 14:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:28.310 14:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:28.310 14:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:28.310 14:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:28.310 14:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:28.310 14:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63186 00:10:28.310 14:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63186 00:10:28.310 14:36:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 63186 ']' 00:10:28.310 14:36:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:28.310 14:36:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.310 14:36:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:28.310 14:36:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.310 14:36:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:28.310 14:36:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.569 [2024-11-04 14:36:27.466024] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:10:28.569 [2024-11-04 14:36:27.466196] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63186 ] 00:10:28.569 [2024-11-04 14:36:27.662327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.827 [2024-11-04 14:36:27.796308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.085 [2024-11-04 14:36:28.004794] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.085 [2024-11-04 14:36:28.004871] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:29.653 14:36:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.653 malloc1 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.653 [2024-11-04 14:36:28.542910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:29.653 [2024-11-04 14:36:28.543012] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.653 [2024-11-04 14:36:28.543046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:29.653 [2024-11-04 14:36:28.543061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.653 
[2024-11-04 14:36:28.545875] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.653 [2024-11-04 14:36:28.545917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:29.653 pt1 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.653 malloc2 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.653 14:36:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.653 [2024-11-04 14:36:28.596592] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:29.653 [2024-11-04 14:36:28.596656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.653 [2024-11-04 14:36:28.596686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:29.653 [2024-11-04 14:36:28.596702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.653 [2024-11-04 14:36:28.599446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.653 [2024-11-04 14:36:28.599502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:29.653 pt2 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.653 [2024-11-04 14:36:28.604660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:29.653 [2024-11-04 14:36:28.607109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:29.653 [2024-11-04 14:36:28.607325] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:29.653 [2024-11-04 14:36:28.607349] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:29.653 [2024-11-04 
14:36:28.607670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:29.653 [2024-11-04 14:36:28.607879] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:29.653 [2024-11-04 14:36:28.607914] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:29.653 [2024-11-04 14:36:28.608119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.653 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.654 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.654 14:36:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.654 14:36:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.654 14:36:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.654 14:36:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.654 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.654 "name": "raid_bdev1", 00:10:29.654 "uuid": "8533e7d4-91ce-485e-a386-8c4ebdf9a27d", 00:10:29.654 "strip_size_kb": 0, 00:10:29.654 "state": "online", 00:10:29.654 "raid_level": "raid1", 00:10:29.654 "superblock": true, 00:10:29.654 "num_base_bdevs": 2, 00:10:29.654 "num_base_bdevs_discovered": 2, 00:10:29.654 "num_base_bdevs_operational": 2, 00:10:29.654 "base_bdevs_list": [ 00:10:29.654 { 00:10:29.654 "name": "pt1", 00:10:29.654 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:29.654 "is_configured": true, 00:10:29.654 "data_offset": 2048, 00:10:29.654 "data_size": 63488 00:10:29.654 }, 00:10:29.654 { 00:10:29.654 "name": "pt2", 00:10:29.654 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:29.654 "is_configured": true, 00:10:29.654 "data_offset": 2048, 00:10:29.654 "data_size": 63488 00:10:29.654 } 00:10:29.654 ] 00:10:29.654 }' 00:10:29.654 14:36:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.654 14:36:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.221 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:30.221 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:30.221 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:30.221 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:30.221 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:30.221 14:36:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:30.221 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:30.221 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:30.221 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.221 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.221 [2024-11-04 14:36:29.105118] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:30.221 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.221 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:30.221 "name": "raid_bdev1", 00:10:30.221 "aliases": [ 00:10:30.221 "8533e7d4-91ce-485e-a386-8c4ebdf9a27d" 00:10:30.221 ], 00:10:30.221 "product_name": "Raid Volume", 00:10:30.221 "block_size": 512, 00:10:30.221 "num_blocks": 63488, 00:10:30.221 "uuid": "8533e7d4-91ce-485e-a386-8c4ebdf9a27d", 00:10:30.221 "assigned_rate_limits": { 00:10:30.221 "rw_ios_per_sec": 0, 00:10:30.221 "rw_mbytes_per_sec": 0, 00:10:30.221 "r_mbytes_per_sec": 0, 00:10:30.221 "w_mbytes_per_sec": 0 00:10:30.221 }, 00:10:30.221 "claimed": false, 00:10:30.221 "zoned": false, 00:10:30.221 "supported_io_types": { 00:10:30.221 "read": true, 00:10:30.221 "write": true, 00:10:30.221 "unmap": false, 00:10:30.221 "flush": false, 00:10:30.221 "reset": true, 00:10:30.221 "nvme_admin": false, 00:10:30.221 "nvme_io": false, 00:10:30.221 "nvme_io_md": false, 00:10:30.221 "write_zeroes": true, 00:10:30.221 "zcopy": false, 00:10:30.221 "get_zone_info": false, 00:10:30.221 "zone_management": false, 00:10:30.221 "zone_append": false, 00:10:30.221 "compare": false, 00:10:30.221 "compare_and_write": false, 00:10:30.221 "abort": false, 00:10:30.221 "seek_hole": false, 00:10:30.221 
"seek_data": false, 00:10:30.221 "copy": false, 00:10:30.221 "nvme_iov_md": false 00:10:30.221 }, 00:10:30.221 "memory_domains": [ 00:10:30.221 { 00:10:30.221 "dma_device_id": "system", 00:10:30.221 "dma_device_type": 1 00:10:30.221 }, 00:10:30.221 { 00:10:30.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.221 "dma_device_type": 2 00:10:30.221 }, 00:10:30.221 { 00:10:30.221 "dma_device_id": "system", 00:10:30.221 "dma_device_type": 1 00:10:30.221 }, 00:10:30.221 { 00:10:30.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.221 "dma_device_type": 2 00:10:30.221 } 00:10:30.221 ], 00:10:30.221 "driver_specific": { 00:10:30.221 "raid": { 00:10:30.221 "uuid": "8533e7d4-91ce-485e-a386-8c4ebdf9a27d", 00:10:30.221 "strip_size_kb": 0, 00:10:30.221 "state": "online", 00:10:30.221 "raid_level": "raid1", 00:10:30.221 "superblock": true, 00:10:30.221 "num_base_bdevs": 2, 00:10:30.221 "num_base_bdevs_discovered": 2, 00:10:30.221 "num_base_bdevs_operational": 2, 00:10:30.221 "base_bdevs_list": [ 00:10:30.221 { 00:10:30.221 "name": "pt1", 00:10:30.221 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:30.221 "is_configured": true, 00:10:30.221 "data_offset": 2048, 00:10:30.221 "data_size": 63488 00:10:30.221 }, 00:10:30.221 { 00:10:30.221 "name": "pt2", 00:10:30.221 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:30.221 "is_configured": true, 00:10:30.221 "data_offset": 2048, 00:10:30.221 "data_size": 63488 00:10:30.221 } 00:10:30.221 ] 00:10:30.221 } 00:10:30.221 } 00:10:30.221 }' 00:10:30.221 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:30.221 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:30.221 pt2' 00:10:30.221 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.221 14:36:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:30.221 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.221 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:30.221 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.221 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.221 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.221 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.221 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.221 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.221 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:30.221 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:30.221 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:30.221 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.221 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.221 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.480 [2024-11-04 14:36:29.373186] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8533e7d4-91ce-485e-a386-8c4ebdf9a27d 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8533e7d4-91ce-485e-a386-8c4ebdf9a27d ']' 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.480 [2024-11-04 14:36:29.412774] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:30.480 [2024-11-04 14:36:29.412801] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:30.480 [2024-11-04 14:36:29.412904] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:30.480 [2024-11-04 14:36:29.413011] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:30.480 [2024-11-04 14:36:29.413032] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.480 [2024-11-04 14:36:29.552885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:30.480 [2024-11-04 14:36:29.555496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:30.480 [2024-11-04 14:36:29.555604] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a 
different raid bdev found on bdev malloc1 00:10:30.480 [2024-11-04 14:36:29.555676] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:30.480 [2024-11-04 14:36:29.555701] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:30.480 [2024-11-04 14:36:29.555716] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:30.480 request: 00:10:30.480 { 00:10:30.480 "name": "raid_bdev1", 00:10:30.480 "raid_level": "raid1", 00:10:30.480 "base_bdevs": [ 00:10:30.480 "malloc1", 00:10:30.480 "malloc2" 00:10:30.480 ], 00:10:30.480 "superblock": false, 00:10:30.480 "method": "bdev_raid_create", 00:10:30.480 "req_id": 1 00:10:30.480 } 00:10:30.480 Got JSON-RPC error response 00:10:30.480 response: 00:10:30.480 { 00:10:30.480 "code": -17, 00:10:30.480 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:30.480 } 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:30.480 14:36:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.738 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:30.738 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:30.738 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:30.738 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.738 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.738 [2024-11-04 14:36:29.620847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:30.738 [2024-11-04 14:36:29.620920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.738 [2024-11-04 14:36:29.620972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:30.738 [2024-11-04 14:36:29.620992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.738 [2024-11-04 14:36:29.623791] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.738 [2024-11-04 14:36:29.623848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:30.738 [2024-11-04 14:36:29.623951] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:30.738 [2024-11-04 14:36:29.624064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:30.738 pt1 00:10:30.738 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.738 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:30.738 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:30.738 14:36:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.738 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:30.738 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:30.738 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:30.738 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.738 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.738 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.738 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.738 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.738 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.738 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.738 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.738 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.738 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.738 "name": "raid_bdev1", 00:10:30.738 "uuid": "8533e7d4-91ce-485e-a386-8c4ebdf9a27d", 00:10:30.738 "strip_size_kb": 0, 00:10:30.738 "state": "configuring", 00:10:30.738 "raid_level": "raid1", 00:10:30.738 "superblock": true, 00:10:30.738 "num_base_bdevs": 2, 00:10:30.738 "num_base_bdevs_discovered": 1, 00:10:30.738 "num_base_bdevs_operational": 2, 00:10:30.738 "base_bdevs_list": [ 00:10:30.738 { 00:10:30.738 "name": "pt1", 00:10:30.739 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:30.739 
"is_configured": true, 00:10:30.739 "data_offset": 2048, 00:10:30.739 "data_size": 63488 00:10:30.739 }, 00:10:30.739 { 00:10:30.739 "name": null, 00:10:30.739 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:30.739 "is_configured": false, 00:10:30.739 "data_offset": 2048, 00:10:30.739 "data_size": 63488 00:10:30.739 } 00:10:30.739 ] 00:10:30.739 }' 00:10:30.739 14:36:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.739 14:36:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.305 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:10:31.305 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:31.305 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:31.305 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:31.305 14:36:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.305 14:36:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.305 [2024-11-04 14:36:30.137098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:31.305 [2024-11-04 14:36:30.137177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:31.305 [2024-11-04 14:36:30.137208] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:31.305 [2024-11-04 14:36:30.137226] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:31.305 [2024-11-04 14:36:30.137819] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:31.305 [2024-11-04 14:36:30.137852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:31.305 [2024-11-04 14:36:30.137949] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:31.305 [2024-11-04 14:36:30.138027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:31.305 [2024-11-04 14:36:30.138175] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:31.305 [2024-11-04 14:36:30.138203] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:31.305 [2024-11-04 14:36:30.138503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:31.305 [2024-11-04 14:36:30.138700] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:31.305 [2024-11-04 14:36:30.138740] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:31.305 [2024-11-04 14:36:30.138913] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.305 pt2 00:10:31.305 14:36:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.305 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:31.305 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:31.305 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:31.305 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:31.305 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.305 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.305 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.305 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:31.305 
14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.305 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.305 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.305 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.305 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.305 14:36:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.305 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:31.305 14:36:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.305 14:36:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.305 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.305 "name": "raid_bdev1", 00:10:31.305 "uuid": "8533e7d4-91ce-485e-a386-8c4ebdf9a27d", 00:10:31.305 "strip_size_kb": 0, 00:10:31.305 "state": "online", 00:10:31.305 "raid_level": "raid1", 00:10:31.305 "superblock": true, 00:10:31.305 "num_base_bdevs": 2, 00:10:31.305 "num_base_bdevs_discovered": 2, 00:10:31.305 "num_base_bdevs_operational": 2, 00:10:31.305 "base_bdevs_list": [ 00:10:31.305 { 00:10:31.305 "name": "pt1", 00:10:31.305 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:31.305 "is_configured": true, 00:10:31.305 "data_offset": 2048, 00:10:31.305 "data_size": 63488 00:10:31.305 }, 00:10:31.305 { 00:10:31.305 "name": "pt2", 00:10:31.305 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:31.305 "is_configured": true, 00:10:31.305 "data_offset": 2048, 00:10:31.305 "data_size": 63488 00:10:31.305 } 00:10:31.305 ] 00:10:31.305 }' 00:10:31.305 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:10:31.305 14:36:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.872 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:31.872 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:31.872 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:31.872 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:31.872 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:31.872 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:31.872 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:31.872 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:31.872 14:36:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.872 14:36:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.872 [2024-11-04 14:36:30.701584] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:31.872 14:36:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.872 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:31.872 "name": "raid_bdev1", 00:10:31.872 "aliases": [ 00:10:31.872 "8533e7d4-91ce-485e-a386-8c4ebdf9a27d" 00:10:31.872 ], 00:10:31.872 "product_name": "Raid Volume", 00:10:31.872 "block_size": 512, 00:10:31.872 "num_blocks": 63488, 00:10:31.872 "uuid": "8533e7d4-91ce-485e-a386-8c4ebdf9a27d", 00:10:31.872 "assigned_rate_limits": { 00:10:31.872 "rw_ios_per_sec": 0, 00:10:31.872 "rw_mbytes_per_sec": 0, 00:10:31.872 "r_mbytes_per_sec": 0, 00:10:31.872 "w_mbytes_per_sec": 0 
00:10:31.872 }, 00:10:31.872 "claimed": false, 00:10:31.872 "zoned": false, 00:10:31.872 "supported_io_types": { 00:10:31.872 "read": true, 00:10:31.872 "write": true, 00:10:31.872 "unmap": false, 00:10:31.872 "flush": false, 00:10:31.872 "reset": true, 00:10:31.872 "nvme_admin": false, 00:10:31.872 "nvme_io": false, 00:10:31.872 "nvme_io_md": false, 00:10:31.872 "write_zeroes": true, 00:10:31.872 "zcopy": false, 00:10:31.872 "get_zone_info": false, 00:10:31.872 "zone_management": false, 00:10:31.872 "zone_append": false, 00:10:31.872 "compare": false, 00:10:31.872 "compare_and_write": false, 00:10:31.872 "abort": false, 00:10:31.872 "seek_hole": false, 00:10:31.872 "seek_data": false, 00:10:31.872 "copy": false, 00:10:31.872 "nvme_iov_md": false 00:10:31.872 }, 00:10:31.872 "memory_domains": [ 00:10:31.872 { 00:10:31.872 "dma_device_id": "system", 00:10:31.872 "dma_device_type": 1 00:10:31.872 }, 00:10:31.872 { 00:10:31.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.872 "dma_device_type": 2 00:10:31.872 }, 00:10:31.872 { 00:10:31.872 "dma_device_id": "system", 00:10:31.872 "dma_device_type": 1 00:10:31.872 }, 00:10:31.872 { 00:10:31.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.872 "dma_device_type": 2 00:10:31.872 } 00:10:31.872 ], 00:10:31.872 "driver_specific": { 00:10:31.872 "raid": { 00:10:31.872 "uuid": "8533e7d4-91ce-485e-a386-8c4ebdf9a27d", 00:10:31.872 "strip_size_kb": 0, 00:10:31.872 "state": "online", 00:10:31.872 "raid_level": "raid1", 00:10:31.872 "superblock": true, 00:10:31.872 "num_base_bdevs": 2, 00:10:31.872 "num_base_bdevs_discovered": 2, 00:10:31.872 "num_base_bdevs_operational": 2, 00:10:31.872 "base_bdevs_list": [ 00:10:31.872 { 00:10:31.872 "name": "pt1", 00:10:31.872 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:31.872 "is_configured": true, 00:10:31.872 "data_offset": 2048, 00:10:31.872 "data_size": 63488 00:10:31.872 }, 00:10:31.872 { 00:10:31.872 "name": "pt2", 00:10:31.872 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:10:31.872 "is_configured": true, 00:10:31.872 "data_offset": 2048, 00:10:31.872 "data_size": 63488 00:10:31.872 } 00:10:31.872 ] 00:10:31.872 } 00:10:31.872 } 00:10:31.872 }' 00:10:31.872 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:31.872 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:31.872 pt2' 00:10:31.872 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.872 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:31.872 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:31.872 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:31.872 14:36:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.872 14:36:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.872 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.872 14:36:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.872 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:31.872 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:31.872 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:31.872 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:31.872 14:36:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:31.872 14:36:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.872 14:36:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.872 14:36:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.872 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:31.872 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:31.872 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:31.873 14:36:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.873 14:36:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.873 14:36:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:31.873 [2024-11-04 14:36:30.961638] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:31.873 14:36:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.131 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8533e7d4-91ce-485e-a386-8c4ebdf9a27d '!=' 8533e7d4-91ce-485e-a386-8c4ebdf9a27d ']' 00:10:32.131 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:32.131 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:32.131 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:32.131 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:32.131 14:36:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.131 14:36:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:32.131 [2024-11-04 14:36:31.013416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:32.131 14:36:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.131 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:32.131 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:32.131 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:32.131 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.131 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.131 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:32.131 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.131 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.131 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.131 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.131 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.131 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:32.131 14:36:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.131 14:36:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.131 14:36:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.131 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.131 "name": "raid_bdev1", 
00:10:32.131 "uuid": "8533e7d4-91ce-485e-a386-8c4ebdf9a27d", 00:10:32.131 "strip_size_kb": 0, 00:10:32.131 "state": "online", 00:10:32.131 "raid_level": "raid1", 00:10:32.131 "superblock": true, 00:10:32.131 "num_base_bdevs": 2, 00:10:32.132 "num_base_bdevs_discovered": 1, 00:10:32.132 "num_base_bdevs_operational": 1, 00:10:32.132 "base_bdevs_list": [ 00:10:32.132 { 00:10:32.132 "name": null, 00:10:32.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.132 "is_configured": false, 00:10:32.132 "data_offset": 0, 00:10:32.132 "data_size": 63488 00:10:32.132 }, 00:10:32.132 { 00:10:32.132 "name": "pt2", 00:10:32.132 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:32.132 "is_configured": true, 00:10:32.132 "data_offset": 2048, 00:10:32.132 "data_size": 63488 00:10:32.132 } 00:10:32.132 ] 00:10:32.132 }' 00:10:32.132 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.132 14:36:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.699 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:32.699 14:36:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.699 14:36:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.699 [2024-11-04 14:36:31.533559] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:32.699 [2024-11-04 14:36:31.533612] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:32.699 [2024-11-04 14:36:31.533726] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:32.699 [2024-11-04 14:36:31.533794] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:32.699 [2024-11-04 14:36:31.533813] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:10:32.699 14:36:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.699 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.699 14:36:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.699 14:36:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.699 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:32.699 14:36:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.699 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:32.699 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:32.699 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:32.699 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:32.699 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:32.699 14:36:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.699 14:36:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.699 14:36:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.699 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:32.699 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:32.699 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:32.699 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:32.699 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:10:32.699 14:36:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:32.699 14:36:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.699 14:36:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.699 [2024-11-04 14:36:31.605526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:32.699 [2024-11-04 14:36:31.605610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:32.699 [2024-11-04 14:36:31.605639] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:32.699 [2024-11-04 14:36:31.605657] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:32.699 [2024-11-04 14:36:31.608694] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:32.699 [2024-11-04 14:36:31.608753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:32.699 [2024-11-04 14:36:31.608872] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:32.699 [2024-11-04 14:36:31.608969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:32.699 [2024-11-04 14:36:31.609122] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:32.699 [2024-11-04 14:36:31.609151] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:32.699 [2024-11-04 14:36:31.609444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:32.699 [2024-11-04 14:36:31.609651] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:32.700 [2024-11-04 14:36:31.609677] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:32.700 
[2024-11-04 14:36:31.609901] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:32.700 pt2 00:10:32.700 14:36:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.700 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:32.700 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:32.700 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:32.700 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:32.700 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.700 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:32.700 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.700 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.700 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.700 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.700 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.700 14:36:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.700 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:32.700 14:36:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.700 14:36:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.700 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.700 "name": 
"raid_bdev1", 00:10:32.700 "uuid": "8533e7d4-91ce-485e-a386-8c4ebdf9a27d", 00:10:32.700 "strip_size_kb": 0, 00:10:32.700 "state": "online", 00:10:32.700 "raid_level": "raid1", 00:10:32.700 "superblock": true, 00:10:32.700 "num_base_bdevs": 2, 00:10:32.700 "num_base_bdevs_discovered": 1, 00:10:32.700 "num_base_bdevs_operational": 1, 00:10:32.700 "base_bdevs_list": [ 00:10:32.700 { 00:10:32.700 "name": null, 00:10:32.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.700 "is_configured": false, 00:10:32.700 "data_offset": 2048, 00:10:32.700 "data_size": 63488 00:10:32.700 }, 00:10:32.700 { 00:10:32.700 "name": "pt2", 00:10:32.700 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:32.700 "is_configured": true, 00:10:32.700 "data_offset": 2048, 00:10:32.700 "data_size": 63488 00:10:32.700 } 00:10:32.700 ] 00:10:32.700 }' 00:10:32.700 14:36:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.700 14:36:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.266 [2024-11-04 14:36:32.117976] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:33.266 [2024-11-04 14:36:32.118016] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:33.266 [2024-11-04 14:36:32.118130] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:33.266 [2024-11-04 14:36:32.118209] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:33.266 [2024-11-04 14:36:32.118226] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name raid_bdev1, state offline 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.266 [2024-11-04 14:36:32.181993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:33.266 [2024-11-04 14:36:32.182066] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.266 [2024-11-04 14:36:32.182097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:10:33.266 [2024-11-04 14:36:32.182113] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.266 [2024-11-04 14:36:32.185089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.266 [2024-11-04 14:36:32.185135] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:33.266 [2024-11-04 14:36:32.185253] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:33.266 [2024-11-04 14:36:32.185314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:33.266 [2024-11-04 14:36:32.185490] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:33.266 [2024-11-04 14:36:32.185515] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:33.266 [2024-11-04 14:36:32.185538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:10:33.266 [2024-11-04 14:36:32.185617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:33.266 [2024-11-04 14:36:32.185733] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:33.266 [2024-11-04 14:36:32.185749] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:33.266 [2024-11-04 14:36:32.186095] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:33.266 [2024-11-04 14:36:32.186293] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:33.266 [2024-11-04 14:36:32.186329] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:10:33.266 [2024-11-04 14:36:32.186564] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.266 pt1 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 1 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.266 "name": "raid_bdev1", 00:10:33.266 "uuid": "8533e7d4-91ce-485e-a386-8c4ebdf9a27d", 00:10:33.266 "strip_size_kb": 0, 00:10:33.266 "state": "online", 00:10:33.266 "raid_level": "raid1", 00:10:33.266 "superblock": true, 00:10:33.266 "num_base_bdevs": 2, 00:10:33.266 "num_base_bdevs_discovered": 1, 00:10:33.266 "num_base_bdevs_operational": 1, 00:10:33.266 
"base_bdevs_list": [ 00:10:33.266 { 00:10:33.266 "name": null, 00:10:33.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.266 "is_configured": false, 00:10:33.266 "data_offset": 2048, 00:10:33.266 "data_size": 63488 00:10:33.266 }, 00:10:33.266 { 00:10:33.266 "name": "pt2", 00:10:33.266 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:33.266 "is_configured": true, 00:10:33.266 "data_offset": 2048, 00:10:33.266 "data_size": 63488 00:10:33.266 } 00:10:33.266 ] 00:10:33.266 }' 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.266 14:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.832 14:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:33.832 14:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.832 14:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.832 14:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:33.832 14:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.832 14:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:33.832 14:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:33.832 14:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.832 14:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.832 14:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:33.832 [2024-11-04 14:36:32.730904] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:33.832 14:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:10:33.832 14:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 8533e7d4-91ce-485e-a386-8c4ebdf9a27d '!=' 8533e7d4-91ce-485e-a386-8c4ebdf9a27d ']' 00:10:33.832 14:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63186 00:10:33.832 14:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 63186 ']' 00:10:33.832 14:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 63186 00:10:33.832 14:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:10:33.832 14:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:33.832 14:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63186 00:10:33.832 14:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:33.832 14:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:33.832 killing process with pid 63186 00:10:33.832 14:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63186' 00:10:33.832 14:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 63186 00:10:33.832 [2024-11-04 14:36:32.807209] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:33.832 14:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 63186 00:10:33.832 [2024-11-04 14:36:32.807316] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:33.832 [2024-11-04 14:36:32.807380] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:33.832 [2024-11-04 14:36:32.807403] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:34.090 [2024-11-04 14:36:32.983161] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:35.099 14:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:35.099 00:10:35.099 real 0m6.575s 00:10:35.099 user 0m10.510s 00:10:35.099 sys 0m0.947s 00:10:35.099 14:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:35.099 14:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.099 ************************************ 00:10:35.099 END TEST raid_superblock_test 00:10:35.099 ************************************ 00:10:35.099 14:36:33 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:10:35.099 14:36:33 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:35.099 14:36:33 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:35.099 14:36:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:35.099 ************************************ 00:10:35.099 START TEST raid_read_error_test 00:10:35.099 ************************************ 00:10:35.099 14:36:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 read 00:10:35.099 14:36:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:35.099 14:36:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:35.099 14:36:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:35.099 14:36:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:35.099 14:36:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:35.099 14:36:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:35.099 14:36:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:35.099 14:36:33 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:35.099 14:36:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:35.099 14:36:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:35.099 14:36:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:35.099 14:36:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:35.099 14:36:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:35.099 14:36:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:35.099 14:36:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:35.099 14:36:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:35.099 14:36:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:35.099 14:36:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:35.099 14:36:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:35.099 14:36:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:35.099 14:36:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:35.099 14:36:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wY9tzaswGP 00:10:35.099 14:36:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63522 00:10:35.099 14:36:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63522 00:10:35.099 14:36:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 63522 ']' 00:10:35.099 14:36:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 
50 -o 128k -q 1 -z -f -L bdev_raid 00:10:35.099 14:36:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.099 14:36:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:35.099 14:36:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.099 14:36:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:35.099 14:36:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.099 [2024-11-04 14:36:34.113850] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:10:35.099 [2024-11-04 14:36:34.114065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63522 ] 00:10:35.357 [2024-11-04 14:36:34.299389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.357 [2024-11-04 14:36:34.420434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.615 [2024-11-04 14:36:34.602633] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.615 [2024-11-04 14:36:34.602746] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.181 14:36:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:36.181 14:36:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:36.181 14:36:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:36.181 14:36:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:36.181 14:36:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.181 14:36:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.181 BaseBdev1_malloc 00:10:36.181 14:36:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.181 14:36:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:36.181 14:36:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.181 14:36:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.181 true 00:10:36.181 14:36:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.181 14:36:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:36.181 14:36:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.181 14:36:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.181 [2024-11-04 14:36:35.084030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:36.181 [2024-11-04 14:36:35.084093] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.181 [2024-11-04 14:36:35.084122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:36.181 [2024-11-04 14:36:35.084141] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.181 [2024-11-04 14:36:35.087035] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.181 [2024-11-04 14:36:35.087111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 
00:10:36.181 BaseBdev1 00:10:36.181 14:36:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.181 14:36:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:36.181 14:36:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:36.181 14:36:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.181 14:36:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.181 BaseBdev2_malloc 00:10:36.181 14:36:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.182 14:36:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:36.182 14:36:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.182 14:36:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.182 true 00:10:36.182 14:36:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.182 14:36:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:36.182 14:36:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.182 14:36:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.182 [2024-11-04 14:36:35.144087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:36.182 [2024-11-04 14:36:35.144162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.182 [2024-11-04 14:36:35.144185] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:36.182 [2024-11-04 14:36:35.144202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:10:36.182 [2024-11-04 14:36:35.146978] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.182 [2024-11-04 14:36:35.147035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:36.182 BaseBdev2 00:10:36.182 14:36:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.182 14:36:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:36.182 14:36:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.182 14:36:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.182 [2024-11-04 14:36:35.152161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:36.182 [2024-11-04 14:36:35.154658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:36.182 [2024-11-04 14:36:35.154966] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:36.182 [2024-11-04 14:36:35.155022] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:36.182 [2024-11-04 14:36:35.155338] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:36.182 [2024-11-04 14:36:35.155604] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:36.182 [2024-11-04 14:36:35.155629] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:36.182 [2024-11-04 14:36:35.155811] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.182 14:36:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.182 14:36:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:10:36.182 14:36:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:36.182 14:36:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.182 14:36:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.182 14:36:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.182 14:36:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:36.182 14:36:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.182 14:36:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.182 14:36:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.182 14:36:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.182 14:36:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.182 14:36:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.182 14:36:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.182 14:36:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.182 14:36:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.182 14:36:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.182 "name": "raid_bdev1", 00:10:36.182 "uuid": "8b64ac2d-9e29-4bdd-a73e-7725acfc07f4", 00:10:36.182 "strip_size_kb": 0, 00:10:36.182 "state": "online", 00:10:36.182 "raid_level": "raid1", 00:10:36.182 "superblock": true, 00:10:36.182 "num_base_bdevs": 2, 00:10:36.182 "num_base_bdevs_discovered": 2, 00:10:36.182 "num_base_bdevs_operational": 
2, 00:10:36.182 "base_bdevs_list": [ 00:10:36.182 { 00:10:36.182 "name": "BaseBdev1", 00:10:36.182 "uuid": "39d632f4-52ef-5d34-a1e0-d28a59bb9aa1", 00:10:36.182 "is_configured": true, 00:10:36.182 "data_offset": 2048, 00:10:36.182 "data_size": 63488 00:10:36.182 }, 00:10:36.182 { 00:10:36.182 "name": "BaseBdev2", 00:10:36.182 "uuid": "d59cef6a-3e49-59f1-807d-2289b34a599d", 00:10:36.182 "is_configured": true, 00:10:36.182 "data_offset": 2048, 00:10:36.182 "data_size": 63488 00:10:36.182 } 00:10:36.182 ] 00:10:36.182 }' 00:10:36.182 14:36:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.182 14:36:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.748 14:36:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:36.748 14:36:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:36.748 [2024-11-04 14:36:35.773577] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:37.682 14:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:37.682 14:36:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.682 14:36:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.682 14:36:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.682 14:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:37.682 14:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:37.682 14:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:37.682 14:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:37.682 
14:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:37.682 14:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:37.682 14:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:37.682 14:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:37.682 14:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:37.683 14:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:37.683 14:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.683 14:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.683 14:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.683 14:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.683 14:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.683 14:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.683 14:36:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.683 14:36:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.683 14:36:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.683 14:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.683 "name": "raid_bdev1", 00:10:37.683 "uuid": "8b64ac2d-9e29-4bdd-a73e-7725acfc07f4", 00:10:37.683 "strip_size_kb": 0, 00:10:37.683 "state": "online", 00:10:37.683 "raid_level": "raid1", 00:10:37.683 "superblock": true, 00:10:37.683 "num_base_bdevs": 
2, 00:10:37.683 "num_base_bdevs_discovered": 2, 00:10:37.683 "num_base_bdevs_operational": 2, 00:10:37.683 "base_bdevs_list": [ 00:10:37.683 { 00:10:37.683 "name": "BaseBdev1", 00:10:37.683 "uuid": "39d632f4-52ef-5d34-a1e0-d28a59bb9aa1", 00:10:37.683 "is_configured": true, 00:10:37.683 "data_offset": 2048, 00:10:37.683 "data_size": 63488 00:10:37.683 }, 00:10:37.683 { 00:10:37.683 "name": "BaseBdev2", 00:10:37.683 "uuid": "d59cef6a-3e49-59f1-807d-2289b34a599d", 00:10:37.683 "is_configured": true, 00:10:37.683 "data_offset": 2048, 00:10:37.683 "data_size": 63488 00:10:37.683 } 00:10:37.683 ] 00:10:37.683 }' 00:10:37.683 14:36:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.683 14:36:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.248 14:36:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:38.248 14:36:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.248 14:36:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.248 [2024-11-04 14:36:37.214999] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:38.248 [2024-11-04 14:36:37.215048] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:38.248 [2024-11-04 14:36:37.218337] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:38.248 [2024-11-04 14:36:37.218402] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:38.248 [2024-11-04 14:36:37.218506] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:38.248 [2024-11-04 14:36:37.218542] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:38.248 { 00:10:38.248 "results": [ 00:10:38.248 { 00:10:38.248 "job": 
"raid_bdev1", 00:10:38.248 "core_mask": "0x1", 00:10:38.248 "workload": "randrw", 00:10:38.248 "percentage": 50, 00:10:38.248 "status": "finished", 00:10:38.248 "queue_depth": 1, 00:10:38.248 "io_size": 131072, 00:10:38.248 "runtime": 1.439025, 00:10:38.248 "iops": 13093.587672208614, 00:10:38.248 "mibps": 1636.6984590260768, 00:10:38.248 "io_failed": 0, 00:10:38.248 "io_timeout": 0, 00:10:38.248 "avg_latency_us": 72.37222221150043, 00:10:38.248 "min_latency_us": 37.236363636363635, 00:10:38.248 "max_latency_us": 1787.3454545454545 00:10:38.248 } 00:10:38.248 ], 00:10:38.248 "core_count": 1 00:10:38.248 } 00:10:38.248 14:36:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.248 14:36:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63522 00:10:38.248 14:36:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 63522 ']' 00:10:38.248 14:36:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 63522 00:10:38.248 14:36:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:10:38.248 14:36:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:38.248 14:36:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63522 00:10:38.248 14:36:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:38.248 14:36:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:38.248 killing process with pid 63522 00:10:38.248 14:36:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63522' 00:10:38.248 14:36:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 63522 00:10:38.248 [2024-11-04 14:36:37.255043] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:38.248 
14:36:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 63522 00:10:38.506 [2024-11-04 14:36:37.372916] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:39.440 14:36:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wY9tzaswGP 00:10:39.440 14:36:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:39.440 14:36:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:39.440 14:36:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:39.440 14:36:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:39.441 14:36:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:39.441 14:36:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:39.441 14:36:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:39.441 00:10:39.441 real 0m4.403s 00:10:39.441 user 0m5.507s 00:10:39.441 sys 0m0.527s 00:10:39.441 14:36:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:39.441 14:36:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.441 ************************************ 00:10:39.441 END TEST raid_read_error_test 00:10:39.441 ************************************ 00:10:39.441 14:36:38 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:10:39.441 14:36:38 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:39.441 14:36:38 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:39.441 14:36:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:39.441 ************************************ 00:10:39.441 START TEST raid_write_error_test 00:10:39.441 ************************************ 00:10:39.441 14:36:38 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 write 00:10:39.441 14:36:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:39.441 14:36:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:39.441 14:36:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:39.441 14:36:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:39.441 14:36:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:39.441 14:36:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:39.441 14:36:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:39.441 14:36:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:39.441 14:36:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:39.441 14:36:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:39.441 14:36:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:39.441 14:36:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:39.441 14:36:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:39.441 14:36:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:39.441 14:36:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:39.441 14:36:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:39.441 14:36:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:39.441 14:36:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:39.441 
14:36:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:39.441 14:36:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:39.441 14:36:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:39.441 14:36:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.eOQGz9tiCM 00:10:39.441 14:36:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63662 00:10:39.441 14:36:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63662 00:10:39.441 14:36:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 63662 ']' 00:10:39.441 14:36:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.441 14:36:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:39.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.441 14:36:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.441 14:36:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:39.441 14:36:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.441 14:36:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:39.699 [2024-11-04 14:36:38.580483] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:10:39.699 [2024-11-04 14:36:38.580643] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63662 ] 00:10:39.699 [2024-11-04 14:36:38.762483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.957 [2024-11-04 14:36:38.874717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.957 [2024-11-04 14:36:39.059442] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:39.957 [2024-11-04 14:36:39.059495] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:40.523 14:36:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:40.523 14:36:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:40.523 14:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:40.523 14:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:40.523 14:36:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.523 14:36:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.523 BaseBdev1_malloc 00:10:40.523 14:36:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.523 14:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:40.523 14:36:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.523 14:36:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.523 true 00:10:40.523 14:36:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:40.523 14:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:40.523 14:36:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.523 14:36:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.523 [2024-11-04 14:36:39.594924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:40.523 [2024-11-04 14:36:39.595000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.523 [2024-11-04 14:36:39.595028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:40.523 [2024-11-04 14:36:39.595045] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.523 [2024-11-04 14:36:39.597789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.523 [2024-11-04 14:36:39.597834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:40.523 BaseBdev1 00:10:40.523 14:36:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.523 14:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:40.523 14:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:40.523 14:36:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.523 14:36:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.523 BaseBdev2_malloc 00:10:40.523 14:36:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.523 14:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:40.523 14:36:39 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.523 14:36:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.781 true 00:10:40.781 14:36:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.781 14:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:40.781 14:36:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.781 14:36:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.781 [2024-11-04 14:36:39.651171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:40.781 [2024-11-04 14:36:39.651246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.781 [2024-11-04 14:36:39.651271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:40.781 [2024-11-04 14:36:39.651303] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.781 [2024-11-04 14:36:39.654100] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.781 [2024-11-04 14:36:39.654144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:40.781 BaseBdev2 00:10:40.781 14:36:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.781 14:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:40.781 14:36:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.781 14:36:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.781 [2024-11-04 14:36:39.659257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:10:40.781 [2024-11-04 14:36:39.661688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:40.781 [2024-11-04 14:36:39.661960] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:40.781 [2024-11-04 14:36:39.662004] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:40.781 [2024-11-04 14:36:39.662302] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:40.781 [2024-11-04 14:36:39.662544] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:40.781 [2024-11-04 14:36:39.662571] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:40.781 [2024-11-04 14:36:39.662756] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.781 14:36:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.781 14:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:40.781 14:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:40.781 14:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.781 14:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.781 14:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:40.781 14:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:40.781 14:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.781 14:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.781 14:36:39 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.781 14:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.781 14:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.781 14:36:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.781 14:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.781 14:36:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.781 14:36:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.781 14:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.781 "name": "raid_bdev1", 00:10:40.781 "uuid": "c0a68998-3a65-440f-bba1-73184b4699f0", 00:10:40.781 "strip_size_kb": 0, 00:10:40.781 "state": "online", 00:10:40.781 "raid_level": "raid1", 00:10:40.781 "superblock": true, 00:10:40.781 "num_base_bdevs": 2, 00:10:40.781 "num_base_bdevs_discovered": 2, 00:10:40.781 "num_base_bdevs_operational": 2, 00:10:40.781 "base_bdevs_list": [ 00:10:40.781 { 00:10:40.781 "name": "BaseBdev1", 00:10:40.781 "uuid": "6412efb9-b242-5f8d-8884-0048d920bfc8", 00:10:40.781 "is_configured": true, 00:10:40.781 "data_offset": 2048, 00:10:40.781 "data_size": 63488 00:10:40.781 }, 00:10:40.781 { 00:10:40.781 "name": "BaseBdev2", 00:10:40.781 "uuid": "43204770-5e0b-5713-af92-b1b36b1c300f", 00:10:40.781 "is_configured": true, 00:10:40.781 "data_offset": 2048, 00:10:40.781 "data_size": 63488 00:10:40.781 } 00:10:40.781 ] 00:10:40.781 }' 00:10:40.781 14:36:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.781 14:36:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.361 14:36:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:41.361 14:36:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:41.361 [2024-11-04 14:36:40.304651] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:42.307 14:36:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:42.307 14:36:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.307 14:36:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.307 [2024-11-04 14:36:41.188734] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:42.307 [2024-11-04 14:36:41.188813] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:42.307 [2024-11-04 14:36:41.189042] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:10:42.307 14:36:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.307 14:36:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:42.307 14:36:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:42.307 14:36:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:42.307 14:36:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:10:42.307 14:36:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:42.307 14:36:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:42.307 14:36:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:42.307 14:36:41 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:42.307 14:36:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:42.307 14:36:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:42.307 14:36:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.307 14:36:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.308 14:36:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.308 14:36:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.308 14:36:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.308 14:36:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:42.308 14:36:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.308 14:36:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.308 14:36:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.308 14:36:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.308 "name": "raid_bdev1", 00:10:42.308 "uuid": "c0a68998-3a65-440f-bba1-73184b4699f0", 00:10:42.308 "strip_size_kb": 0, 00:10:42.308 "state": "online", 00:10:42.308 "raid_level": "raid1", 00:10:42.308 "superblock": true, 00:10:42.308 "num_base_bdevs": 2, 00:10:42.308 "num_base_bdevs_discovered": 1, 00:10:42.308 "num_base_bdevs_operational": 1, 00:10:42.308 "base_bdevs_list": [ 00:10:42.308 { 00:10:42.308 "name": null, 00:10:42.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.308 "is_configured": false, 00:10:42.308 "data_offset": 0, 00:10:42.308 "data_size": 63488 00:10:42.308 }, 00:10:42.308 { 00:10:42.308 "name": 
"BaseBdev2", 00:10:42.308 "uuid": "43204770-5e0b-5713-af92-b1b36b1c300f", 00:10:42.308 "is_configured": true, 00:10:42.308 "data_offset": 2048, 00:10:42.308 "data_size": 63488 00:10:42.308 } 00:10:42.308 ] 00:10:42.308 }' 00:10:42.308 14:36:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.308 14:36:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.873 14:36:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:42.873 14:36:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.873 14:36:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.873 [2024-11-04 14:36:41.716071] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:42.873 [2024-11-04 14:36:41.716106] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:42.873 [2024-11-04 14:36:41.719425] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:42.873 [2024-11-04 14:36:41.719492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:42.873 [2024-11-04 14:36:41.719570] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:42.873 [2024-11-04 14:36:41.719588] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:42.873 { 00:10:42.873 "results": [ 00:10:42.873 { 00:10:42.873 "job": "raid_bdev1", 00:10:42.873 "core_mask": "0x1", 00:10:42.873 "workload": "randrw", 00:10:42.873 "percentage": 50, 00:10:42.873 "status": "finished", 00:10:42.873 "queue_depth": 1, 00:10:42.873 "io_size": 131072, 00:10:42.873 "runtime": 1.408967, 00:10:42.873 "iops": 15522.719836589502, 00:10:42.873 "mibps": 1940.3399795736877, 00:10:42.873 "io_failed": 0, 00:10:42.873 "io_timeout": 0, 
00:10:42.873 "avg_latency_us": 60.38124573428492,
00:10:42.873 "min_latency_us": 38.167272727272724,
00:10:42.873 "max_latency_us": 1832.0290909090909
00:10:42.873 }
00:10:42.873 ],
00:10:42.873 "core_count": 1
00:10:42.873 }
00:10:42.873 14:36:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:42.873 14:36:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63662
00:10:42.873 14:36:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 63662 ']'
00:10:42.873 14:36:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 63662
00:10:42.873 14:36:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname
00:10:42.873 14:36:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:10:42.873 14:36:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63662
00:10:42.873 14:36:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:10:42.873 killing process with pid 63662
00:10:42.873 14:36:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:10:42.873 14:36:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63662'
00:10:42.873 14:36:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 63662
00:10:42.873 14:36:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 63662
00:10:42.873 [2024-11-04 14:36:41.757832] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:42.873 [2024-11-04 14:36:41.874557] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:43.806 14:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.eOQGz9tiCM
00:10:43.806 14:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:10:43.806 14:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:10:43.806 14:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00
00:10:43.806 14:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1
00:10:43.806 14:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:10:43.806 14:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0
00:10:43.806 14:36:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]]
00:10:43.806
00:10:43.806 real 0m4.463s
00:10:43.806 user 0m5.616s
00:10:43.806 sys 0m0.548s
00:10:43.806 14:36:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:10:43.806 14:36:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:10:43.806 ************************************
00:10:43.806 END TEST raid_write_error_test
00:10:43.806 ************************************
00:10:44.064 14:36:42 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4}
00:10:44.065 14:36:42 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:10:44.065 14:36:42 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false
00:10:44.065 14:36:42 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:10:44.065 14:36:42 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:10:44.065 14:36:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:10:44.065 ************************************
00:10:44.065 START TEST raid_state_function_test
00:10:44.065 ************************************
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 false
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63806
00:10:44.065 Process raid pid: 63806
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63806'
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63806
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 63806 ']'
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100
00:10:44.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable
00:10:44.065 14:36:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:44.065 [2024-11-04 14:36:43.081873] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization...
00:10:44.065 [2024-11-04 14:36:43.082116] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:44.323 [2024-11-04 14:36:43.261349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:44.323 [2024-11-04 14:36:43.392827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:44.581 [2024-11-04 14:36:43.597062] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:44.581 [2024-11-04 14:36:43.597132] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:45.147 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:10:45.147 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0
00:10:45.147 14:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:45.147 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:45.147 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.147 [2024-11-04 14:36:44.028416] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:45.147 [2024-11-04 14:36:44.028512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:45.147 [2024-11-04 14:36:44.028529] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:45.147 [2024-11-04 14:36:44.028545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:45.147 [2024-11-04 14:36:44.028555] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:45.147 [2024-11-04 14:36:44.028570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:45.147 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:45.147 14:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:10:45.147 14:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:45.147 14:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:45.147 14:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:45.147 14:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:45.147 14:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:45.147 14:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:45.147 14:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:45.147 14:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:45.147 14:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:45.147 14:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:45.147 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:45.148 14:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:45.148 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.148 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:45.148 14:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:45.148 "name": "Existed_Raid",
00:10:45.148 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:45.148 "strip_size_kb": 64,
00:10:45.148 "state": "configuring",
00:10:45.148 "raid_level": "raid0",
00:10:45.148 "superblock": false,
00:10:45.148 "num_base_bdevs": 3,
00:10:45.148 "num_base_bdevs_discovered": 0,
00:10:45.148 "num_base_bdevs_operational": 3,
00:10:45.148 "base_bdevs_list": [
00:10:45.148 {
00:10:45.148 "name": "BaseBdev1",
00:10:45.148 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:45.148 "is_configured": false,
00:10:45.148 "data_offset": 0,
00:10:45.148 "data_size": 0
00:10:45.148 },
00:10:45.148 {
00:10:45.148 "name": "BaseBdev2",
00:10:45.148 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:45.148 "is_configured": false,
00:10:45.148 "data_offset": 0,
00:10:45.148 "data_size": 0
00:10:45.148 },
00:10:45.148 {
00:10:45.148 "name": "BaseBdev3",
00:10:45.148 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:45.148 "is_configured": false,
00:10:45.148 "data_offset": 0,
00:10:45.148 "data_size": 0
00:10:45.148 }
00:10:45.148 ]
00:10:45.148 }'
00:10:45.148 14:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:45.148 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.715 [2024-11-04 14:36:44.556470] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:45.715 [2024-11-04 14:36:44.556548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.715 [2024-11-04 14:36:44.564470] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:45.715 [2024-11-04 14:36:44.564523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:45.715 [2024-11-04 14:36:44.564538] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:45.715 [2024-11-04 14:36:44.564554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:45.715 [2024-11-04 14:36:44.564563] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:45.715 [2024-11-04 14:36:44.564577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.715 [2024-11-04 14:36:44.610029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:45.715 BaseBdev1
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.715 [
00:10:45.715 {
00:10:45.715 "name": "BaseBdev1",
00:10:45.715 "aliases": [
00:10:45.715 "c8cc743a-4790-470f-ba55-854ca14317dd"
00:10:45.715 ],
00:10:45.715 "product_name": "Malloc disk",
00:10:45.715 "block_size": 512,
00:10:45.715 "num_blocks": 65536,
00:10:45.715 "uuid": "c8cc743a-4790-470f-ba55-854ca14317dd",
00:10:45.715 "assigned_rate_limits": {
00:10:45.715 "rw_ios_per_sec": 0,
00:10:45.715 "rw_mbytes_per_sec": 0,
00:10:45.715 "r_mbytes_per_sec": 0,
00:10:45.715 "w_mbytes_per_sec": 0
00:10:45.715 },
00:10:45.715 "claimed": true,
00:10:45.715 "claim_type": "exclusive_write",
00:10:45.715 "zoned": false,
00:10:45.715 "supported_io_types": {
00:10:45.715 "read": true,
00:10:45.715 "write": true,
00:10:45.715 "unmap": true,
00:10:45.715 "flush": true,
00:10:45.715 "reset": true,
00:10:45.715 "nvme_admin": false,
00:10:45.715 "nvme_io": false,
00:10:45.715 "nvme_io_md": false,
00:10:45.715 "write_zeroes": true,
00:10:45.715 "zcopy": true,
00:10:45.715 "get_zone_info": false,
00:10:45.715 "zone_management": false,
00:10:45.715 "zone_append": false,
00:10:45.715 "compare": false,
00:10:45.715 "compare_and_write": false,
00:10:45.715 "abort": true,
00:10:45.715 "seek_hole": false,
00:10:45.715 "seek_data": false,
00:10:45.715 "copy": true,
00:10:45.715 "nvme_iov_md": false
00:10:45.715 },
00:10:45.715 "memory_domains": [
00:10:45.715 {
00:10:45.715 "dma_device_id": "system",
00:10:45.715 "dma_device_type": 1
00:10:45.715 },
00:10:45.715 {
00:10:45.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:45.715 "dma_device_type": 2
00:10:45.715 }
00:10:45.715 ],
00:10:45.715 "driver_specific": {}
00:10:45.715 }
00:10:45.715 ]
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:45.715 14:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:45.715 "name": "Existed_Raid",
00:10:45.715 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:45.715 "strip_size_kb": 64,
00:10:45.715 "state": "configuring",
00:10:45.715 "raid_level": "raid0",
00:10:45.715 "superblock": false,
00:10:45.715 "num_base_bdevs": 3,
00:10:45.715 "num_base_bdevs_discovered": 1,
00:10:45.715 "num_base_bdevs_operational": 3,
00:10:45.715 "base_bdevs_list": [
00:10:45.715 {
00:10:45.715 "name": "BaseBdev1",
00:10:45.715 "uuid": "c8cc743a-4790-470f-ba55-854ca14317dd",
00:10:45.715 "is_configured": true,
00:10:45.715 "data_offset": 0,
00:10:45.715 "data_size": 65536
00:10:45.715 },
00:10:45.715 {
00:10:45.715 "name": "BaseBdev2",
00:10:45.715 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:45.715 "is_configured": false,
00:10:45.715 "data_offset": 0,
00:10:45.716 "data_size": 0
00:10:45.716 },
00:10:45.716 {
00:10:45.716 "name": "BaseBdev3",
00:10:45.716 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:45.716 "is_configured": false,
00:10:45.716 "data_offset": 0,
00:10:45.716 "data_size": 0
00:10:45.716 }
00:10:45.716 ]
00:10:45.716 }'
00:10:45.716 14:36:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:45.716 14:36:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.282 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:10:46.282 14:36:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:46.282 14:36:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.282 [2024-11-04 14:36:45.142192] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:46.282 [2024-11-04 14:36:45.142257] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:10:46.282 14:36:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:46.282 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:10:46.282 14:36:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:46.282 14:36:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.282 [2024-11-04 14:36:45.150233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:46.282 [2024-11-04 14:36:45.152694] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:46.282 [2024-11-04 14:36:45.152763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:46.282 [2024-11-04 14:36:45.152779] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:46.282 [2024-11-04 14:36:45.152795] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:46.282 14:36:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:46.282 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:10:46.282 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:46.282 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:10:46.282 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:46.282 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:46.282 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:46.282 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:46.282 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:46.282 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:46.282 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:46.282 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:46.282 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:46.282 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:46.282 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:46.282 14:36:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:46.282 14:36:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.282 14:36:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:46.282 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:46.282 "name": "Existed_Raid",
00:10:46.282 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:46.282 "strip_size_kb": 64,
00:10:46.282 "state": "configuring",
00:10:46.282 "raid_level": "raid0",
00:10:46.282 "superblock": false,
00:10:46.282 "num_base_bdevs": 3,
00:10:46.282 "num_base_bdevs_discovered": 1,
00:10:46.282 "num_base_bdevs_operational": 3,
00:10:46.282 "base_bdevs_list": [
00:10:46.282 {
00:10:46.282 "name": "BaseBdev1",
00:10:46.282 "uuid": "c8cc743a-4790-470f-ba55-854ca14317dd",
00:10:46.282 "is_configured": true,
00:10:46.282 "data_offset": 0,
00:10:46.282 "data_size": 65536
00:10:46.282 },
00:10:46.282 {
00:10:46.282 "name": "BaseBdev2",
00:10:46.282 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:46.282 "is_configured": false,
00:10:46.282 "data_offset": 0,
00:10:46.282 "data_size": 0
00:10:46.282 },
00:10:46.282 {
00:10:46.282 "name": "BaseBdev3",
00:10:46.282 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:46.282 "is_configured": false,
00:10:46.283 "data_offset": 0,
00:10:46.283 "data_size": 0
00:10:46.283 }
00:10:46.283 ]
00:10:46.283 }'
00:10:46.283 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:46.283 14:36:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.850 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:10:46.850 14:36:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:46.850 14:36:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.850 [2024-11-04 14:36:45.712392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:46.850 BaseBdev2
00:10:46.850 14:36:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:46.850 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:10:46.850 14:36:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:10:46.850 14:36:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:10:46.850 14:36:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:10:46.850 14:36:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:10:46.850 14:36:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:10:46.850 14:36:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:10:46.850 14:36:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:46.850 14:36:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.850 14:36:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:46.850 14:36:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:10:46.850 14:36:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:46.850 14:36:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.850 [
00:10:46.850 {
00:10:46.850 "name": "BaseBdev2",
00:10:46.850 "aliases": [
00:10:46.850 "0b2c18f1-6619-4711-9e9a-78d9c5c43586"
00:10:46.850 ],
00:10:46.850 "product_name": "Malloc disk",
00:10:46.850 "block_size": 512,
00:10:46.850 "num_blocks": 65536,
00:10:46.850 "uuid": "0b2c18f1-6619-4711-9e9a-78d9c5c43586",
00:10:46.850 "assigned_rate_limits": {
00:10:46.850 "rw_ios_per_sec": 0,
00:10:46.850 "rw_mbytes_per_sec": 0,
00:10:46.850 "r_mbytes_per_sec": 0,
00:10:46.850 "w_mbytes_per_sec": 0
00:10:46.850 },
00:10:46.850 "claimed": true,
00:10:46.850 "claim_type": "exclusive_write",
00:10:46.850 "zoned": false,
00:10:46.850 "supported_io_types": {
00:10:46.850 "read": true,
00:10:46.850 "write": true,
00:10:46.850 "unmap": true,
00:10:46.850 "flush": true,
00:10:46.850 "reset": true,
00:10:46.850 "nvme_admin": false,
00:10:46.850 "nvme_io": false,
00:10:46.850 "nvme_io_md": false,
00:10:46.850 "write_zeroes": true,
00:10:46.850 "zcopy": true,
00:10:46.850 "get_zone_info": false,
00:10:46.850 "zone_management": false,
00:10:46.850 "zone_append": false,
00:10:46.850 "compare": false,
00:10:46.850 "compare_and_write": false,
00:10:46.851 "abort": true,
00:10:46.851 "seek_hole": false,
00:10:46.851 "seek_data": false,
00:10:46.851 "copy": true,
00:10:46.851 "nvme_iov_md": false
00:10:46.851 },
00:10:46.851 "memory_domains": [
00:10:46.851 {
00:10:46.851 "dma_device_id": "system",
00:10:46.851 "dma_device_type": 1
00:10:46.851 },
00:10:46.851 {
00:10:46.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:46.851 "dma_device_type": 2
00:10:46.851 }
00:10:46.851 ],
00:10:46.851 "driver_specific": {}
00:10:46.851 }
00:10:46.851 ]
00:10:46.851 14:36:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:46.851 14:36:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:10:46.851 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:10:46.851 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:46.851 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:10:46.851 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:46.851 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:46.851 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:10:46.851 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:46.851 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:46.851 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:46.851 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:46.851 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:46.851 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:46.851 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:46.851 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:46.851 14:36:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:46.851 14:36:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:46.851 14:36:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:46.851 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:46.851 "name": "Existed_Raid",
00:10:46.851 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:46.851 "strip_size_kb": 64,
00:10:46.851 "state": "configuring",
00:10:46.851 "raid_level": "raid0",
00:10:46.851 "superblock": false,
00:10:46.851 "num_base_bdevs": 3,
00:10:46.851 "num_base_bdevs_discovered": 2,
00:10:46.851 "num_base_bdevs_operational": 3,
00:10:46.851 "base_bdevs_list": [
00:10:46.851 {
00:10:46.851 "name": "BaseBdev1",
00:10:46.851 "uuid": "c8cc743a-4790-470f-ba55-854ca14317dd",
00:10:46.851 "is_configured": true,
00:10:46.851 "data_offset": 0,
00:10:46.851 "data_size": 65536
00:10:46.851 },
00:10:46.851 {
00:10:46.851 "name": "BaseBdev2",
00:10:46.851 "uuid": "0b2c18f1-6619-4711-9e9a-78d9c5c43586",
00:10:46.851 "is_configured": true,
00:10:46.851 "data_offset": 0,
00:10:46.851 "data_size": 65536
00:10:46.851 },
00:10:46.851 {
00:10:46.851 "name": "BaseBdev3",
00:10:46.851 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:46.851 "is_configured": false,
00:10:46.851 "data_offset": 0,
00:10:46.851 "data_size": 0
00:10:46.851 }
00:10:46.851 ]
00:10:46.851 }'
00:10:46.851 14:36:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:46.851 14:36:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:47.418 14:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:10:47.418 14:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:47.418 14:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:47.418 [2024-11-04 14:36:46.313442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:47.418 [2024-11-04 14:36:46.313504] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:10:47.418 [2024-11-04 14:36:46.313541] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:10:47.418 [2024-11-04 14:36:46.313909] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:10:47.418 [2024-11-04 14:36:46.314167] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:10:47.418 [2024-11-04 14:36:46.314194] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:10:47.418 [2024-11-04 14:36:46.314514] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:47.418 BaseBdev3
00:10:47.418 14:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:47.418 14:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:10:47.418 14:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3
00:10:47.418 14:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:10:47.418 14:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i
00:10:47.418 14:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:10:47.418 14:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:10:47.418 14:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:10:47.418 14:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:47.418 14:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:47.418 14:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
14:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:47.418 14:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.418 14:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.418 [ 00:10:47.418 { 00:10:47.418 "name": "BaseBdev3", 00:10:47.418 "aliases": [ 00:10:47.418 "109669b3-0f11-4927-b538-e5fe01b3495e" 00:10:47.418 ], 00:10:47.418 "product_name": "Malloc disk", 00:10:47.418 "block_size": 512, 00:10:47.418 "num_blocks": 65536, 00:10:47.418 "uuid": "109669b3-0f11-4927-b538-e5fe01b3495e", 00:10:47.418 "assigned_rate_limits": { 00:10:47.418 "rw_ios_per_sec": 0, 00:10:47.418 "rw_mbytes_per_sec": 0, 00:10:47.418 "r_mbytes_per_sec": 0, 00:10:47.418 "w_mbytes_per_sec": 0 00:10:47.418 }, 00:10:47.418 "claimed": true, 00:10:47.418 "claim_type": "exclusive_write", 00:10:47.418 "zoned": false, 00:10:47.418 "supported_io_types": { 00:10:47.418 "read": true, 00:10:47.418 "write": true, 00:10:47.418 "unmap": true, 00:10:47.418 "flush": true, 00:10:47.418 "reset": true, 00:10:47.418 "nvme_admin": false, 00:10:47.418 "nvme_io": false, 00:10:47.418 "nvme_io_md": false, 00:10:47.418 "write_zeroes": true, 00:10:47.418 "zcopy": true, 00:10:47.418 "get_zone_info": false, 00:10:47.418 "zone_management": false, 00:10:47.418 "zone_append": false, 00:10:47.418 "compare": false, 00:10:47.418 "compare_and_write": false, 00:10:47.418 "abort": true, 00:10:47.418 "seek_hole": false, 00:10:47.418 "seek_data": false, 00:10:47.418 "copy": true, 00:10:47.418 "nvme_iov_md": false 00:10:47.418 }, 00:10:47.418 "memory_domains": [ 00:10:47.418 { 00:10:47.418 "dma_device_id": "system", 00:10:47.418 "dma_device_type": 1 00:10:47.418 }, 00:10:47.418 { 00:10:47.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.418 "dma_device_type": 2 00:10:47.418 } 00:10:47.418 ], 00:10:47.418 "driver_specific": {} 00:10:47.418 } 00:10:47.418 ] 
00:10:47.418 14:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.418 14:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:47.418 14:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:47.418 14:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:47.418 14:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:47.418 14:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.418 14:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:47.418 14:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:47.418 14:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.418 14:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:47.418 14:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.418 14:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.418 14:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.418 14:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.419 14:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.419 14:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.419 14:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.419 14:36:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:47.419 14:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.419 14:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.419 "name": "Existed_Raid", 00:10:47.419 "uuid": "44c34606-9cca-42c5-8d19-a47cefc6daac", 00:10:47.419 "strip_size_kb": 64, 00:10:47.419 "state": "online", 00:10:47.419 "raid_level": "raid0", 00:10:47.419 "superblock": false, 00:10:47.419 "num_base_bdevs": 3, 00:10:47.419 "num_base_bdevs_discovered": 3, 00:10:47.419 "num_base_bdevs_operational": 3, 00:10:47.419 "base_bdevs_list": [ 00:10:47.419 { 00:10:47.419 "name": "BaseBdev1", 00:10:47.419 "uuid": "c8cc743a-4790-470f-ba55-854ca14317dd", 00:10:47.419 "is_configured": true, 00:10:47.419 "data_offset": 0, 00:10:47.419 "data_size": 65536 00:10:47.419 }, 00:10:47.419 { 00:10:47.419 "name": "BaseBdev2", 00:10:47.419 "uuid": "0b2c18f1-6619-4711-9e9a-78d9c5c43586", 00:10:47.419 "is_configured": true, 00:10:47.419 "data_offset": 0, 00:10:47.419 "data_size": 65536 00:10:47.419 }, 00:10:47.419 { 00:10:47.419 "name": "BaseBdev3", 00:10:47.419 "uuid": "109669b3-0f11-4927-b538-e5fe01b3495e", 00:10:47.419 "is_configured": true, 00:10:47.419 "data_offset": 0, 00:10:47.419 "data_size": 65536 00:10:47.419 } 00:10:47.419 ] 00:10:47.419 }' 00:10:47.419 14:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.419 14:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.985 14:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:47.985 14:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:47.985 14:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:47.985 14:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:10:47.985 14:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:47.985 14:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:47.985 14:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:47.985 14:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.985 14:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:47.985 14:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.985 [2024-11-04 14:36:46.850077] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:47.985 14:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.985 14:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:47.985 "name": "Existed_Raid", 00:10:47.985 "aliases": [ 00:10:47.985 "44c34606-9cca-42c5-8d19-a47cefc6daac" 00:10:47.985 ], 00:10:47.985 "product_name": "Raid Volume", 00:10:47.985 "block_size": 512, 00:10:47.985 "num_blocks": 196608, 00:10:47.985 "uuid": "44c34606-9cca-42c5-8d19-a47cefc6daac", 00:10:47.985 "assigned_rate_limits": { 00:10:47.985 "rw_ios_per_sec": 0, 00:10:47.985 "rw_mbytes_per_sec": 0, 00:10:47.985 "r_mbytes_per_sec": 0, 00:10:47.985 "w_mbytes_per_sec": 0 00:10:47.985 }, 00:10:47.985 "claimed": false, 00:10:47.985 "zoned": false, 00:10:47.985 "supported_io_types": { 00:10:47.985 "read": true, 00:10:47.985 "write": true, 00:10:47.985 "unmap": true, 00:10:47.985 "flush": true, 00:10:47.985 "reset": true, 00:10:47.985 "nvme_admin": false, 00:10:47.985 "nvme_io": false, 00:10:47.985 "nvme_io_md": false, 00:10:47.985 "write_zeroes": true, 00:10:47.985 "zcopy": false, 00:10:47.985 "get_zone_info": false, 00:10:47.985 "zone_management": false, 00:10:47.985 
"zone_append": false, 00:10:47.985 "compare": false, 00:10:47.985 "compare_and_write": false, 00:10:47.985 "abort": false, 00:10:47.985 "seek_hole": false, 00:10:47.985 "seek_data": false, 00:10:47.985 "copy": false, 00:10:47.985 "nvme_iov_md": false 00:10:47.985 }, 00:10:47.985 "memory_domains": [ 00:10:47.985 { 00:10:47.985 "dma_device_id": "system", 00:10:47.985 "dma_device_type": 1 00:10:47.985 }, 00:10:47.985 { 00:10:47.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.985 "dma_device_type": 2 00:10:47.985 }, 00:10:47.985 { 00:10:47.985 "dma_device_id": "system", 00:10:47.985 "dma_device_type": 1 00:10:47.985 }, 00:10:47.985 { 00:10:47.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.985 "dma_device_type": 2 00:10:47.985 }, 00:10:47.985 { 00:10:47.985 "dma_device_id": "system", 00:10:47.985 "dma_device_type": 1 00:10:47.985 }, 00:10:47.985 { 00:10:47.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.985 "dma_device_type": 2 00:10:47.985 } 00:10:47.985 ], 00:10:47.985 "driver_specific": { 00:10:47.985 "raid": { 00:10:47.985 "uuid": "44c34606-9cca-42c5-8d19-a47cefc6daac", 00:10:47.985 "strip_size_kb": 64, 00:10:47.985 "state": "online", 00:10:47.985 "raid_level": "raid0", 00:10:47.985 "superblock": false, 00:10:47.985 "num_base_bdevs": 3, 00:10:47.985 "num_base_bdevs_discovered": 3, 00:10:47.985 "num_base_bdevs_operational": 3, 00:10:47.985 "base_bdevs_list": [ 00:10:47.985 { 00:10:47.985 "name": "BaseBdev1", 00:10:47.985 "uuid": "c8cc743a-4790-470f-ba55-854ca14317dd", 00:10:47.985 "is_configured": true, 00:10:47.985 "data_offset": 0, 00:10:47.985 "data_size": 65536 00:10:47.985 }, 00:10:47.985 { 00:10:47.985 "name": "BaseBdev2", 00:10:47.985 "uuid": "0b2c18f1-6619-4711-9e9a-78d9c5c43586", 00:10:47.985 "is_configured": true, 00:10:47.985 "data_offset": 0, 00:10:47.985 "data_size": 65536 00:10:47.985 }, 00:10:47.985 { 00:10:47.985 "name": "BaseBdev3", 00:10:47.985 "uuid": "109669b3-0f11-4927-b538-e5fe01b3495e", 00:10:47.985 "is_configured": true, 
00:10:47.985 "data_offset": 0, 00:10:47.985 "data_size": 65536 00:10:47.985 } 00:10:47.985 ] 00:10:47.985 } 00:10:47.985 } 00:10:47.985 }' 00:10:47.986 14:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:47.986 14:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:47.986 BaseBdev2 00:10:47.986 BaseBdev3' 00:10:47.986 14:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.986 14:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:47.986 14:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.986 14:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:47.986 14:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.986 14:36:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.986 14:36:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.986 14:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.986 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.986 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.986 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.986 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:47.986 14:36:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.986 14:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.986 14:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.986 14:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.986 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.986 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.986 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.244 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.244 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:48.244 14:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.244 14:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.244 14:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.244 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.244 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.244 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:48.244 14:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.244 14:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.244 [2024-11-04 14:36:47.157775] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:48.244 [2024-11-04 14:36:47.157812] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:48.244 [2024-11-04 14:36:47.157895] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:48.244 14:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.244 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:48.244 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:48.244 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:48.244 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:48.244 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:48.244 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:48.244 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.244 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:48.244 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:48.244 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.244 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:48.244 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.244 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.244 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:48.244 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.244 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.244 14:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.244 14:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.244 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.244 14:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.244 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.244 "name": "Existed_Raid", 00:10:48.244 "uuid": "44c34606-9cca-42c5-8d19-a47cefc6daac", 00:10:48.244 "strip_size_kb": 64, 00:10:48.244 "state": "offline", 00:10:48.244 "raid_level": "raid0", 00:10:48.244 "superblock": false, 00:10:48.244 "num_base_bdevs": 3, 00:10:48.244 "num_base_bdevs_discovered": 2, 00:10:48.244 "num_base_bdevs_operational": 2, 00:10:48.244 "base_bdevs_list": [ 00:10:48.244 { 00:10:48.244 "name": null, 00:10:48.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.244 "is_configured": false, 00:10:48.244 "data_offset": 0, 00:10:48.244 "data_size": 65536 00:10:48.244 }, 00:10:48.244 { 00:10:48.244 "name": "BaseBdev2", 00:10:48.245 "uuid": "0b2c18f1-6619-4711-9e9a-78d9c5c43586", 00:10:48.245 "is_configured": true, 00:10:48.245 "data_offset": 0, 00:10:48.245 "data_size": 65536 00:10:48.245 }, 00:10:48.245 { 00:10:48.245 "name": "BaseBdev3", 00:10:48.245 "uuid": "109669b3-0f11-4927-b538-e5fe01b3495e", 00:10:48.245 "is_configured": true, 00:10:48.245 "data_offset": 0, 00:10:48.245 "data_size": 65536 00:10:48.245 } 00:10:48.245 ] 00:10:48.245 }' 00:10:48.245 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.245 14:36:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.811 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:48.811 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:48.811 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.811 14:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.811 14:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.811 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:48.811 14:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.811 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:48.811 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:48.811 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:48.811 14:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.811 14:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.811 [2024-11-04 14:36:47.827099] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:48.811 14:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.811 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:48.811 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:48.811 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.811 14:36:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.811 14:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.811 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:48.811 14:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.070 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:49.070 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:49.070 14:36:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:49.070 14:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.070 14:36:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.070 [2024-11-04 14:36:47.975058] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:49.070 [2024-11-04 14:36:47.975122] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:49.070 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.070 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:49.070 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:49.070 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.070 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.070 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.070 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | 
select(.)' 00:10:49.070 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.070 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:49.070 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:49.070 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:49.070 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:49.070 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:49.070 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:49.070 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.070 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.070 BaseBdev2 00:10:49.070 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.070 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:49.070 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:49.070 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:49.070 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:49.070 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:49.070 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:49.070 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:49.070 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:49.070 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.070 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.070 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:49.070 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.070 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.070 [ 00:10:49.070 { 00:10:49.070 "name": "BaseBdev2", 00:10:49.070 "aliases": [ 00:10:49.070 "81643711-3433-445b-8fa1-726ea267606f" 00:10:49.070 ], 00:10:49.070 "product_name": "Malloc disk", 00:10:49.070 "block_size": 512, 00:10:49.070 "num_blocks": 65536, 00:10:49.070 "uuid": "81643711-3433-445b-8fa1-726ea267606f", 00:10:49.070 "assigned_rate_limits": { 00:10:49.070 "rw_ios_per_sec": 0, 00:10:49.070 "rw_mbytes_per_sec": 0, 00:10:49.070 "r_mbytes_per_sec": 0, 00:10:49.070 "w_mbytes_per_sec": 0 00:10:49.070 }, 00:10:49.070 "claimed": false, 00:10:49.070 "zoned": false, 00:10:49.070 "supported_io_types": { 00:10:49.070 "read": true, 00:10:49.070 "write": true, 00:10:49.070 "unmap": true, 00:10:49.070 "flush": true, 00:10:49.070 "reset": true, 00:10:49.070 "nvme_admin": false, 00:10:49.070 "nvme_io": false, 00:10:49.070 "nvme_io_md": false, 00:10:49.070 "write_zeroes": true, 00:10:49.070 "zcopy": true, 00:10:49.070 "get_zone_info": false, 00:10:49.070 "zone_management": false, 00:10:49.070 "zone_append": false, 00:10:49.329 "compare": false, 00:10:49.329 "compare_and_write": false, 00:10:49.329 "abort": true, 00:10:49.329 "seek_hole": false, 00:10:49.329 "seek_data": false, 00:10:49.329 "copy": true, 00:10:49.329 "nvme_iov_md": false 00:10:49.329 }, 00:10:49.329 "memory_domains": [ 00:10:49.329 { 00:10:49.329 "dma_device_id": "system", 00:10:49.329 "dma_device_type": 1 00:10:49.329 }, 
00:10:49.329 { 00:10:49.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.329 "dma_device_type": 2 00:10:49.329 } 00:10:49.329 ], 00:10:49.329 "driver_specific": {} 00:10:49.329 } 00:10:49.329 ] 00:10:49.329 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.329 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:49.329 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:49.329 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:49.329 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:49.329 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.329 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.329 BaseBdev3 00:10:49.329 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.329 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:49.329 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:49.329 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:49.329 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:49.329 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:49.329 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:49.329 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:49.329 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:49.329 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.329 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.329 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:49.329 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.329 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.329 [ 00:10:49.329 { 00:10:49.329 "name": "BaseBdev3", 00:10:49.329 "aliases": [ 00:10:49.330 "e7a3b18a-7240-4728-bb82-79e1161deb12" 00:10:49.330 ], 00:10:49.330 "product_name": "Malloc disk", 00:10:49.330 "block_size": 512, 00:10:49.330 "num_blocks": 65536, 00:10:49.330 "uuid": "e7a3b18a-7240-4728-bb82-79e1161deb12", 00:10:49.330 "assigned_rate_limits": { 00:10:49.330 "rw_ios_per_sec": 0, 00:10:49.330 "rw_mbytes_per_sec": 0, 00:10:49.330 "r_mbytes_per_sec": 0, 00:10:49.330 "w_mbytes_per_sec": 0 00:10:49.330 }, 00:10:49.330 "claimed": false, 00:10:49.330 "zoned": false, 00:10:49.330 "supported_io_types": { 00:10:49.330 "read": true, 00:10:49.330 "write": true, 00:10:49.330 "unmap": true, 00:10:49.330 "flush": true, 00:10:49.330 "reset": true, 00:10:49.330 "nvme_admin": false, 00:10:49.330 "nvme_io": false, 00:10:49.330 "nvme_io_md": false, 00:10:49.330 "write_zeroes": true, 00:10:49.330 "zcopy": true, 00:10:49.330 "get_zone_info": false, 00:10:49.330 "zone_management": false, 00:10:49.330 "zone_append": false, 00:10:49.330 "compare": false, 00:10:49.330 "compare_and_write": false, 00:10:49.330 "abort": true, 00:10:49.330 "seek_hole": false, 00:10:49.330 "seek_data": false, 00:10:49.330 "copy": true, 00:10:49.330 "nvme_iov_md": false 00:10:49.330 }, 00:10:49.330 "memory_domains": [ 00:10:49.330 { 00:10:49.330 "dma_device_id": "system", 00:10:49.330 "dma_device_type": 1 00:10:49.330 }, 00:10:49.330 { 
00:10:49.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.330 "dma_device_type": 2 00:10:49.330 } 00:10:49.330 ], 00:10:49.330 "driver_specific": {} 00:10:49.330 } 00:10:49.330 ] 00:10:49.330 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.330 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:49.330 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:49.330 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:49.330 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:49.330 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.330 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.330 [2024-11-04 14:36:48.271147] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:49.330 [2024-11-04 14:36:48.271201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:49.330 [2024-11-04 14:36:48.271231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:49.330 [2024-11-04 14:36:48.273552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:49.330 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.330 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:49.330 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.330 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:49.330 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.330 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.330 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:49.330 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.330 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.330 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.330 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.330 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.330 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.330 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.330 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.330 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.330 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.330 "name": "Existed_Raid", 00:10:49.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.330 "strip_size_kb": 64, 00:10:49.330 "state": "configuring", 00:10:49.330 "raid_level": "raid0", 00:10:49.330 "superblock": false, 00:10:49.330 "num_base_bdevs": 3, 00:10:49.330 "num_base_bdevs_discovered": 2, 00:10:49.330 "num_base_bdevs_operational": 3, 00:10:49.330 "base_bdevs_list": [ 00:10:49.330 { 00:10:49.330 "name": "BaseBdev1", 00:10:49.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.330 
"is_configured": false, 00:10:49.330 "data_offset": 0, 00:10:49.330 "data_size": 0 00:10:49.330 }, 00:10:49.330 { 00:10:49.330 "name": "BaseBdev2", 00:10:49.330 "uuid": "81643711-3433-445b-8fa1-726ea267606f", 00:10:49.330 "is_configured": true, 00:10:49.330 "data_offset": 0, 00:10:49.330 "data_size": 65536 00:10:49.330 }, 00:10:49.330 { 00:10:49.330 "name": "BaseBdev3", 00:10:49.330 "uuid": "e7a3b18a-7240-4728-bb82-79e1161deb12", 00:10:49.330 "is_configured": true, 00:10:49.330 "data_offset": 0, 00:10:49.330 "data_size": 65536 00:10:49.330 } 00:10:49.330 ] 00:10:49.330 }' 00:10:49.330 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.330 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.897 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:49.897 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.897 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.897 [2024-11-04 14:36:48.791283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:49.897 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.897 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:49.897 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.897 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.897 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.897 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.897 14:36:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:49.897 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.897 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.898 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.898 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.898 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.898 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.898 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.898 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.898 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.898 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.898 "name": "Existed_Raid", 00:10:49.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.898 "strip_size_kb": 64, 00:10:49.898 "state": "configuring", 00:10:49.898 "raid_level": "raid0", 00:10:49.898 "superblock": false, 00:10:49.898 "num_base_bdevs": 3, 00:10:49.898 "num_base_bdevs_discovered": 1, 00:10:49.898 "num_base_bdevs_operational": 3, 00:10:49.898 "base_bdevs_list": [ 00:10:49.898 { 00:10:49.898 "name": "BaseBdev1", 00:10:49.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.898 "is_configured": false, 00:10:49.898 "data_offset": 0, 00:10:49.898 "data_size": 0 00:10:49.898 }, 00:10:49.898 { 00:10:49.898 "name": null, 00:10:49.898 "uuid": "81643711-3433-445b-8fa1-726ea267606f", 00:10:49.898 "is_configured": false, 00:10:49.898 "data_offset": 0, 
00:10:49.898 "data_size": 65536 00:10:49.898 }, 00:10:49.898 { 00:10:49.898 "name": "BaseBdev3", 00:10:49.898 "uuid": "e7a3b18a-7240-4728-bb82-79e1161deb12", 00:10:49.898 "is_configured": true, 00:10:49.898 "data_offset": 0, 00:10:49.898 "data_size": 65536 00:10:49.898 } 00:10:49.898 ] 00:10:49.898 }' 00:10:49.898 14:36:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.898 14:36:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.465 [2024-11-04 14:36:49.398680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:50.465 BaseBdev1 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local 
bdev_name=BaseBdev1 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.465 [ 00:10:50.465 { 00:10:50.465 "name": "BaseBdev1", 00:10:50.465 "aliases": [ 00:10:50.465 "bdac2f3e-95cd-4ff2-b0cb-a2d9c8ea1019" 00:10:50.465 ], 00:10:50.465 "product_name": "Malloc disk", 00:10:50.465 "block_size": 512, 00:10:50.465 "num_blocks": 65536, 00:10:50.465 "uuid": "bdac2f3e-95cd-4ff2-b0cb-a2d9c8ea1019", 00:10:50.465 "assigned_rate_limits": { 00:10:50.465 "rw_ios_per_sec": 0, 00:10:50.465 "rw_mbytes_per_sec": 0, 00:10:50.465 "r_mbytes_per_sec": 0, 00:10:50.465 "w_mbytes_per_sec": 0 00:10:50.465 }, 00:10:50.465 "claimed": true, 00:10:50.465 "claim_type": "exclusive_write", 00:10:50.465 "zoned": false, 00:10:50.465 "supported_io_types": { 00:10:50.465 "read": true, 00:10:50.465 "write": true, 00:10:50.465 "unmap": 
true, 00:10:50.465 "flush": true, 00:10:50.465 "reset": true, 00:10:50.465 "nvme_admin": false, 00:10:50.465 "nvme_io": false, 00:10:50.465 "nvme_io_md": false, 00:10:50.465 "write_zeroes": true, 00:10:50.465 "zcopy": true, 00:10:50.465 "get_zone_info": false, 00:10:50.465 "zone_management": false, 00:10:50.465 "zone_append": false, 00:10:50.465 "compare": false, 00:10:50.465 "compare_and_write": false, 00:10:50.465 "abort": true, 00:10:50.465 "seek_hole": false, 00:10:50.465 "seek_data": false, 00:10:50.465 "copy": true, 00:10:50.465 "nvme_iov_md": false 00:10:50.465 }, 00:10:50.465 "memory_domains": [ 00:10:50.465 { 00:10:50.465 "dma_device_id": "system", 00:10:50.465 "dma_device_type": 1 00:10:50.465 }, 00:10:50.465 { 00:10:50.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.465 "dma_device_type": 2 00:10:50.465 } 00:10:50.465 ], 00:10:50.465 "driver_specific": {} 00:10:50.465 } 00:10:50.465 ] 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.465 14:36:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.465 14:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.466 "name": "Existed_Raid", 00:10:50.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.466 "strip_size_kb": 64, 00:10:50.466 "state": "configuring", 00:10:50.466 "raid_level": "raid0", 00:10:50.466 "superblock": false, 00:10:50.466 "num_base_bdevs": 3, 00:10:50.466 "num_base_bdevs_discovered": 2, 00:10:50.466 "num_base_bdevs_operational": 3, 00:10:50.466 "base_bdevs_list": [ 00:10:50.466 { 00:10:50.466 "name": "BaseBdev1", 00:10:50.466 "uuid": "bdac2f3e-95cd-4ff2-b0cb-a2d9c8ea1019", 00:10:50.466 "is_configured": true, 00:10:50.466 "data_offset": 0, 00:10:50.466 "data_size": 65536 00:10:50.466 }, 00:10:50.466 { 00:10:50.466 "name": null, 00:10:50.466 "uuid": "81643711-3433-445b-8fa1-726ea267606f", 00:10:50.466 "is_configured": false, 00:10:50.466 "data_offset": 0, 00:10:50.466 "data_size": 65536 00:10:50.466 }, 00:10:50.466 { 00:10:50.466 "name": "BaseBdev3", 00:10:50.466 "uuid": "e7a3b18a-7240-4728-bb82-79e1161deb12", 00:10:50.466 "is_configured": true, 00:10:50.466 "data_offset": 0, 
00:10:50.466 "data_size": 65536 00:10:50.466 } 00:10:50.466 ] 00:10:50.466 }' 00:10:50.466 14:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.466 14:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.033 14:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.033 14:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.033 14:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.033 14:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:51.033 14:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.033 14:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:51.033 14:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:51.033 14:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.033 14:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.033 [2024-11-04 14:36:49.982888] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:51.033 14:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.033 14:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:51.033 14:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.033 14:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.033 14:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:51.033 14:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.033 14:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:51.033 14:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.033 14:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.033 14:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.033 14:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.033 14:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.033 14:36:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.033 14:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.033 14:36:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.033 14:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.033 14:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.033 "name": "Existed_Raid", 00:10:51.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.033 "strip_size_kb": 64, 00:10:51.033 "state": "configuring", 00:10:51.033 "raid_level": "raid0", 00:10:51.033 "superblock": false, 00:10:51.033 "num_base_bdevs": 3, 00:10:51.033 "num_base_bdevs_discovered": 1, 00:10:51.033 "num_base_bdevs_operational": 3, 00:10:51.033 "base_bdevs_list": [ 00:10:51.033 { 00:10:51.033 "name": "BaseBdev1", 00:10:51.033 "uuid": "bdac2f3e-95cd-4ff2-b0cb-a2d9c8ea1019", 00:10:51.033 "is_configured": true, 00:10:51.033 "data_offset": 0, 00:10:51.033 "data_size": 65536 00:10:51.033 }, 00:10:51.033 { 
00:10:51.033 "name": null, 00:10:51.033 "uuid": "81643711-3433-445b-8fa1-726ea267606f", 00:10:51.033 "is_configured": false, 00:10:51.033 "data_offset": 0, 00:10:51.033 "data_size": 65536 00:10:51.033 }, 00:10:51.033 { 00:10:51.033 "name": null, 00:10:51.033 "uuid": "e7a3b18a-7240-4728-bb82-79e1161deb12", 00:10:51.033 "is_configured": false, 00:10:51.033 "data_offset": 0, 00:10:51.033 "data_size": 65536 00:10:51.033 } 00:10:51.033 ] 00:10:51.033 }' 00:10:51.033 14:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.033 14:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.599 14:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:51.599 14:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.599 14:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.599 14:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.599 14:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.599 14:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:51.599 14:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:51.599 14:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.599 14:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.599 [2024-11-04 14:36:50.567070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:51.599 14:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.599 14:36:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:51.599 14:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.599 14:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.599 14:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:51.599 14:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.599 14:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:51.599 14:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.599 14:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.599 14:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.599 14:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.599 14:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.599 14:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.599 14:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.599 14:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.599 14:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.599 14:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.599 "name": "Existed_Raid", 00:10:51.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.599 "strip_size_kb": 64, 00:10:51.599 "state": "configuring", 00:10:51.599 "raid_level": "raid0", 00:10:51.599 
"superblock": false, 00:10:51.599 "num_base_bdevs": 3, 00:10:51.599 "num_base_bdevs_discovered": 2, 00:10:51.599 "num_base_bdevs_operational": 3, 00:10:51.599 "base_bdevs_list": [ 00:10:51.599 { 00:10:51.599 "name": "BaseBdev1", 00:10:51.599 "uuid": "bdac2f3e-95cd-4ff2-b0cb-a2d9c8ea1019", 00:10:51.599 "is_configured": true, 00:10:51.599 "data_offset": 0, 00:10:51.599 "data_size": 65536 00:10:51.599 }, 00:10:51.599 { 00:10:51.599 "name": null, 00:10:51.599 "uuid": "81643711-3433-445b-8fa1-726ea267606f", 00:10:51.599 "is_configured": false, 00:10:51.599 "data_offset": 0, 00:10:51.599 "data_size": 65536 00:10:51.599 }, 00:10:51.599 { 00:10:51.599 "name": "BaseBdev3", 00:10:51.599 "uuid": "e7a3b18a-7240-4728-bb82-79e1161deb12", 00:10:51.599 "is_configured": true, 00:10:51.599 "data_offset": 0, 00:10:51.599 "data_size": 65536 00:10:51.599 } 00:10:51.599 ] 00:10:51.599 }' 00:10:51.599 14:36:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.599 14:36:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.166 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.166 14:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.166 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:52.166 14:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.166 14:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.166 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:52.166 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:52.166 14:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:52.166 14:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.166 [2024-11-04 14:36:51.147338] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:52.166 14:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.166 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:52.166 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.166 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.166 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:52.166 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.166 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:52.166 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.166 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.166 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.166 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.166 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.166 14:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.166 14:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.166 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.166 14:36:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.424 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.424 "name": "Existed_Raid", 00:10:52.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.424 "strip_size_kb": 64, 00:10:52.424 "state": "configuring", 00:10:52.424 "raid_level": "raid0", 00:10:52.424 "superblock": false, 00:10:52.424 "num_base_bdevs": 3, 00:10:52.424 "num_base_bdevs_discovered": 1, 00:10:52.424 "num_base_bdevs_operational": 3, 00:10:52.424 "base_bdevs_list": [ 00:10:52.424 { 00:10:52.424 "name": null, 00:10:52.424 "uuid": "bdac2f3e-95cd-4ff2-b0cb-a2d9c8ea1019", 00:10:52.424 "is_configured": false, 00:10:52.424 "data_offset": 0, 00:10:52.424 "data_size": 65536 00:10:52.424 }, 00:10:52.424 { 00:10:52.424 "name": null, 00:10:52.424 "uuid": "81643711-3433-445b-8fa1-726ea267606f", 00:10:52.424 "is_configured": false, 00:10:52.424 "data_offset": 0, 00:10:52.424 "data_size": 65536 00:10:52.424 }, 00:10:52.424 { 00:10:52.424 "name": "BaseBdev3", 00:10:52.424 "uuid": "e7a3b18a-7240-4728-bb82-79e1161deb12", 00:10:52.424 "is_configured": true, 00:10:52.424 "data_offset": 0, 00:10:52.424 "data_size": 65536 00:10:52.424 } 00:10:52.424 ] 00:10:52.424 }' 00:10:52.424 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.424 14:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.707 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.707 14:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.707 14:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.707 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:52.707 14:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:10:52.707 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:52.707 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:52.707 14:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.707 14:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.707 [2024-11-04 14:36:51.787237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:52.707 14:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.707 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:52.707 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.707 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.707 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:52.707 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.707 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:52.707 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.707 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.707 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.707 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.707 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:10:52.707 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.707 14:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.707 14:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.707 14:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.965 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.965 "name": "Existed_Raid", 00:10:52.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.965 "strip_size_kb": 64, 00:10:52.965 "state": "configuring", 00:10:52.965 "raid_level": "raid0", 00:10:52.965 "superblock": false, 00:10:52.965 "num_base_bdevs": 3, 00:10:52.965 "num_base_bdevs_discovered": 2, 00:10:52.965 "num_base_bdevs_operational": 3, 00:10:52.965 "base_bdevs_list": [ 00:10:52.965 { 00:10:52.965 "name": null, 00:10:52.965 "uuid": "bdac2f3e-95cd-4ff2-b0cb-a2d9c8ea1019", 00:10:52.965 "is_configured": false, 00:10:52.965 "data_offset": 0, 00:10:52.965 "data_size": 65536 00:10:52.965 }, 00:10:52.965 { 00:10:52.965 "name": "BaseBdev2", 00:10:52.965 "uuid": "81643711-3433-445b-8fa1-726ea267606f", 00:10:52.965 "is_configured": true, 00:10:52.965 "data_offset": 0, 00:10:52.965 "data_size": 65536 00:10:52.965 }, 00:10:52.965 { 00:10:52.965 "name": "BaseBdev3", 00:10:52.965 "uuid": "e7a3b18a-7240-4728-bb82-79e1161deb12", 00:10:52.965 "is_configured": true, 00:10:52.965 "data_offset": 0, 00:10:52.965 "data_size": 65536 00:10:52.965 } 00:10:52.965 ] 00:10:52.965 }' 00:10:52.965 14:36:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.965 14:36:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.223 14:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.223 
14:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:53.223 14:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.223 14:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.223 14:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.482 14:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:53.482 14:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.482 14:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:53.482 14:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bdac2f3e-95cd-4ff2-b0cb-a2d9c8ea1019 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.483 [2024-11-04 14:36:52.453457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:53.483 [2024-11-04 14:36:52.453649] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:53.483 [2024-11-04 14:36:52.453720] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:53.483 [2024-11-04 14:36:52.454180] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:10:53.483 [2024-11-04 14:36:52.454513] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:53.483 [2024-11-04 14:36:52.454537] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:53.483 [2024-11-04 14:36:52.454827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.483 NewBaseBdev 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:53.483 [ 00:10:53.483 { 00:10:53.483 "name": "NewBaseBdev", 00:10:53.483 "aliases": [ 00:10:53.483 "bdac2f3e-95cd-4ff2-b0cb-a2d9c8ea1019" 00:10:53.483 ], 00:10:53.483 "product_name": "Malloc disk", 00:10:53.483 "block_size": 512, 00:10:53.483 "num_blocks": 65536, 00:10:53.483 "uuid": "bdac2f3e-95cd-4ff2-b0cb-a2d9c8ea1019", 00:10:53.483 "assigned_rate_limits": { 00:10:53.483 "rw_ios_per_sec": 0, 00:10:53.483 "rw_mbytes_per_sec": 0, 00:10:53.483 "r_mbytes_per_sec": 0, 00:10:53.483 "w_mbytes_per_sec": 0 00:10:53.483 }, 00:10:53.483 "claimed": true, 00:10:53.483 "claim_type": "exclusive_write", 00:10:53.483 "zoned": false, 00:10:53.483 "supported_io_types": { 00:10:53.483 "read": true, 00:10:53.483 "write": true, 00:10:53.483 "unmap": true, 00:10:53.483 "flush": true, 00:10:53.483 "reset": true, 00:10:53.483 "nvme_admin": false, 00:10:53.483 "nvme_io": false, 00:10:53.483 "nvme_io_md": false, 00:10:53.483 "write_zeroes": true, 00:10:53.483 "zcopy": true, 00:10:53.483 "get_zone_info": false, 00:10:53.483 "zone_management": false, 00:10:53.483 "zone_append": false, 00:10:53.483 "compare": false, 00:10:53.483 "compare_and_write": false, 00:10:53.483 "abort": true, 00:10:53.483 "seek_hole": false, 00:10:53.483 "seek_data": false, 00:10:53.483 "copy": true, 00:10:53.483 "nvme_iov_md": false 00:10:53.483 }, 00:10:53.483 "memory_domains": [ 00:10:53.483 { 00:10:53.483 "dma_device_id": "system", 00:10:53.483 "dma_device_type": 1 00:10:53.483 }, 00:10:53.483 { 00:10:53.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.483 "dma_device_type": 2 00:10:53.483 } 00:10:53.483 ], 00:10:53.483 "driver_specific": {} 00:10:53.483 } 00:10:53.483 ] 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.483 "name": "Existed_Raid", 00:10:53.483 "uuid": "90414566-30c1-445e-ab93-c517ff34c67f", 00:10:53.483 "strip_size_kb": 64, 00:10:53.483 "state": "online", 00:10:53.483 "raid_level": "raid0", 00:10:53.483 "superblock": false, 00:10:53.483 "num_base_bdevs": 3, 00:10:53.483 
"num_base_bdevs_discovered": 3, 00:10:53.483 "num_base_bdevs_operational": 3, 00:10:53.483 "base_bdevs_list": [ 00:10:53.483 { 00:10:53.483 "name": "NewBaseBdev", 00:10:53.483 "uuid": "bdac2f3e-95cd-4ff2-b0cb-a2d9c8ea1019", 00:10:53.483 "is_configured": true, 00:10:53.483 "data_offset": 0, 00:10:53.483 "data_size": 65536 00:10:53.483 }, 00:10:53.483 { 00:10:53.483 "name": "BaseBdev2", 00:10:53.483 "uuid": "81643711-3433-445b-8fa1-726ea267606f", 00:10:53.483 "is_configured": true, 00:10:53.483 "data_offset": 0, 00:10:53.483 "data_size": 65536 00:10:53.483 }, 00:10:53.483 { 00:10:53.483 "name": "BaseBdev3", 00:10:53.483 "uuid": "e7a3b18a-7240-4728-bb82-79e1161deb12", 00:10:53.483 "is_configured": true, 00:10:53.483 "data_offset": 0, 00:10:53.483 "data_size": 65536 00:10:53.483 } 00:10:53.483 ] 00:10:53.483 }' 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.483 14:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.050 14:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:54.050 14:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:54.050 14:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:54.050 14:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:54.050 14:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:54.050 14:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:54.050 14:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:54.050 14:36:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.050 14:36:52 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:54.050 14:36:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:54.050 [2024-11-04 14:36:52.998089] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:54.050 14:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.050 14:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:54.050 "name": "Existed_Raid", 00:10:54.050 "aliases": [ 00:10:54.050 "90414566-30c1-445e-ab93-c517ff34c67f" 00:10:54.050 ], 00:10:54.050 "product_name": "Raid Volume", 00:10:54.050 "block_size": 512, 00:10:54.050 "num_blocks": 196608, 00:10:54.050 "uuid": "90414566-30c1-445e-ab93-c517ff34c67f", 00:10:54.050 "assigned_rate_limits": { 00:10:54.050 "rw_ios_per_sec": 0, 00:10:54.050 "rw_mbytes_per_sec": 0, 00:10:54.050 "r_mbytes_per_sec": 0, 00:10:54.050 "w_mbytes_per_sec": 0 00:10:54.050 }, 00:10:54.050 "claimed": false, 00:10:54.050 "zoned": false, 00:10:54.050 "supported_io_types": { 00:10:54.050 "read": true, 00:10:54.050 "write": true, 00:10:54.050 "unmap": true, 00:10:54.050 "flush": true, 00:10:54.050 "reset": true, 00:10:54.050 "nvme_admin": false, 00:10:54.050 "nvme_io": false, 00:10:54.050 "nvme_io_md": false, 00:10:54.050 "write_zeroes": true, 00:10:54.050 "zcopy": false, 00:10:54.050 "get_zone_info": false, 00:10:54.050 "zone_management": false, 00:10:54.050 "zone_append": false, 00:10:54.050 "compare": false, 00:10:54.050 "compare_and_write": false, 00:10:54.050 "abort": false, 00:10:54.050 "seek_hole": false, 00:10:54.050 "seek_data": false, 00:10:54.050 "copy": false, 00:10:54.050 "nvme_iov_md": false 00:10:54.050 }, 00:10:54.050 "memory_domains": [ 00:10:54.050 { 00:10:54.050 "dma_device_id": "system", 00:10:54.050 "dma_device_type": 1 00:10:54.050 }, 00:10:54.050 { 00:10:54.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.050 "dma_device_type": 2 00:10:54.050 }, 00:10:54.050 
{ 00:10:54.050 "dma_device_id": "system", 00:10:54.050 "dma_device_type": 1 00:10:54.050 }, 00:10:54.050 { 00:10:54.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.050 "dma_device_type": 2 00:10:54.050 }, 00:10:54.050 { 00:10:54.050 "dma_device_id": "system", 00:10:54.050 "dma_device_type": 1 00:10:54.050 }, 00:10:54.050 { 00:10:54.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.050 "dma_device_type": 2 00:10:54.050 } 00:10:54.050 ], 00:10:54.050 "driver_specific": { 00:10:54.050 "raid": { 00:10:54.050 "uuid": "90414566-30c1-445e-ab93-c517ff34c67f", 00:10:54.050 "strip_size_kb": 64, 00:10:54.050 "state": "online", 00:10:54.050 "raid_level": "raid0", 00:10:54.050 "superblock": false, 00:10:54.050 "num_base_bdevs": 3, 00:10:54.050 "num_base_bdevs_discovered": 3, 00:10:54.050 "num_base_bdevs_operational": 3, 00:10:54.050 "base_bdevs_list": [ 00:10:54.050 { 00:10:54.050 "name": "NewBaseBdev", 00:10:54.050 "uuid": "bdac2f3e-95cd-4ff2-b0cb-a2d9c8ea1019", 00:10:54.050 "is_configured": true, 00:10:54.050 "data_offset": 0, 00:10:54.050 "data_size": 65536 00:10:54.050 }, 00:10:54.050 { 00:10:54.050 "name": "BaseBdev2", 00:10:54.050 "uuid": "81643711-3433-445b-8fa1-726ea267606f", 00:10:54.050 "is_configured": true, 00:10:54.050 "data_offset": 0, 00:10:54.050 "data_size": 65536 00:10:54.050 }, 00:10:54.050 { 00:10:54.050 "name": "BaseBdev3", 00:10:54.050 "uuid": "e7a3b18a-7240-4728-bb82-79e1161deb12", 00:10:54.050 "is_configured": true, 00:10:54.050 "data_offset": 0, 00:10:54.050 "data_size": 65536 00:10:54.050 } 00:10:54.050 ] 00:10:54.050 } 00:10:54.050 } 00:10:54.050 }' 00:10:54.050 14:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:54.050 14:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:54.050 BaseBdev2 00:10:54.050 BaseBdev3' 00:10:54.050 14:36:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.050 14:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:54.050 14:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.050 14:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:54.050 14:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.051 14:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.051 14:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.051 14:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.309 14:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.309 14:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.309 14:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.309 14:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:54.309 14:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.309 14:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.309 14:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.309 14:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.309 14:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.309 
14:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.309 14:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:54.309 14:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:54.309 14:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.309 14:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.309 14:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:54.309 14:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.309 14:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:54.309 14:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:54.309 14:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:54.309 14:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.309 14:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.309 [2024-11-04 14:36:53.305795] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:54.309 [2024-11-04 14:36:53.305828] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:54.309 [2024-11-04 14:36:53.305921] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:54.309 [2024-11-04 14:36:53.306065] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:54.309 [2024-11-04 14:36:53.306087] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:54.309 14:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.309 14:36:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63806 00:10:54.309 14:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 63806 ']' 00:10:54.309 14:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 63806 00:10:54.309 14:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:10:54.309 14:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:54.309 14:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63806 00:10:54.309 killing process with pid 63806 00:10:54.309 14:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:54.309 14:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:54.309 14:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63806' 00:10:54.309 14:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 63806 00:10:54.309 [2024-11-04 14:36:53.347641] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:54.309 14:36:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 63806 00:10:54.567 [2024-11-04 14:36:53.618043] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:55.528 14:36:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:55.528 00:10:55.528 real 0m11.625s 00:10:55.528 user 0m19.396s 00:10:55.528 sys 0m1.544s 00:10:55.528 14:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:55.528 
************************************ 00:10:55.528 END TEST raid_state_function_test 00:10:55.528 ************************************ 00:10:55.528 14:36:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.528 14:36:54 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:10:55.528 14:36:54 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:55.528 14:36:54 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:55.528 14:36:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:55.528 ************************************ 00:10:55.528 START TEST raid_state_function_test_sb 00:10:55.528 ************************************ 00:10:55.528 14:36:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 true 00:10:55.528 14:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:55.528 14:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:55.528 14:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:55.528 14:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:55.786 14:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:55.786 14:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:55.786 14:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:55.786 14:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:55.786 14:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:55.786 14:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 
00:10:55.786 14:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:55.786 14:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:55.786 14:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:55.786 14:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:55.786 14:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:55.786 14:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:55.786 14:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:55.786 14:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:55.786 14:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:55.786 14:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:55.786 14:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:55.786 14:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:55.786 14:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:55.786 14:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:55.786 14:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:55.786 14:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:55.786 14:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64444 00:10:55.786 Process raid pid: 64444 00:10:55.786 14:36:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:55.786 14:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64444' 00:10:55.786 14:36:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64444 00:10:55.786 14:36:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 64444 ']' 00:10:55.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.787 14:36:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.787 14:36:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:55.787 14:36:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.787 14:36:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:55.787 14:36:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.787 [2024-11-04 14:36:54.792902] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:10:55.787 [2024-11-04 14:36:54.793896] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.045 [2024-11-04 14:36:54.984688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.045 [2024-11-04 14:36:55.113472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.303 [2024-11-04 14:36:55.320288] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:56.303 [2024-11-04 14:36:55.320348] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:56.870 14:36:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:56.870 14:36:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:10:56.870 14:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:56.870 14:36:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.870 14:36:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.870 [2024-11-04 14:36:55.832471] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:56.870 [2024-11-04 14:36:55.832582] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:56.870 [2024-11-04 14:36:55.832598] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:56.870 [2024-11-04 14:36:55.832613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:56.870 [2024-11-04 14:36:55.832623] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:10:56.871 [2024-11-04 14:36:55.832635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:56.871 14:36:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.871 14:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:56.871 14:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:56.871 14:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:56.871 14:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:56.871 14:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.871 14:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:56.871 14:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.871 14:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.871 14:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.871 14:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.871 14:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:56.871 14:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.871 14:36:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.871 14:36:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.871 14:36:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.871 14:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.871 "name": "Existed_Raid", 00:10:56.871 "uuid": "3e285f43-cedf-42d1-9161-3c34a90c36f8", 00:10:56.871 "strip_size_kb": 64, 00:10:56.871 "state": "configuring", 00:10:56.871 "raid_level": "raid0", 00:10:56.871 "superblock": true, 00:10:56.871 "num_base_bdevs": 3, 00:10:56.871 "num_base_bdevs_discovered": 0, 00:10:56.871 "num_base_bdevs_operational": 3, 00:10:56.871 "base_bdevs_list": [ 00:10:56.871 { 00:10:56.871 "name": "BaseBdev1", 00:10:56.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.871 "is_configured": false, 00:10:56.871 "data_offset": 0, 00:10:56.871 "data_size": 0 00:10:56.871 }, 00:10:56.871 { 00:10:56.871 "name": "BaseBdev2", 00:10:56.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.871 "is_configured": false, 00:10:56.871 "data_offset": 0, 00:10:56.871 "data_size": 0 00:10:56.871 }, 00:10:56.871 { 00:10:56.871 "name": "BaseBdev3", 00:10:56.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.871 "is_configured": false, 00:10:56.871 "data_offset": 0, 00:10:56.871 "data_size": 0 00:10:56.871 } 00:10:56.871 ] 00:10:56.871 }' 00:10:56.871 14:36:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.871 14:36:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.439 14:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:57.439 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.439 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.439 [2024-11-04 14:36:56.352557] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:57.439 [2024-11-04 14:36:56.352599] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:57.439 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.439 14:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:57.439 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.439 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.439 [2024-11-04 14:36:56.360564] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:57.439 [2024-11-04 14:36:56.360630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:57.439 [2024-11-04 14:36:56.360645] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:57.439 [2024-11-04 14:36:56.360660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:57.439 [2024-11-04 14:36:56.360686] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:57.439 [2024-11-04 14:36:56.360699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:57.439 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.440 [2024-11-04 14:36:56.406312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:57.440 BaseBdev1 
00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.440 [ 00:10:57.440 { 00:10:57.440 "name": "BaseBdev1", 00:10:57.440 "aliases": [ 00:10:57.440 "74ff93c1-93fc-44cc-80cf-13754c91b5ea" 00:10:57.440 ], 00:10:57.440 "product_name": "Malloc disk", 00:10:57.440 "block_size": 512, 00:10:57.440 "num_blocks": 65536, 00:10:57.440 "uuid": "74ff93c1-93fc-44cc-80cf-13754c91b5ea", 00:10:57.440 "assigned_rate_limits": { 00:10:57.440 
"rw_ios_per_sec": 0, 00:10:57.440 "rw_mbytes_per_sec": 0, 00:10:57.440 "r_mbytes_per_sec": 0, 00:10:57.440 "w_mbytes_per_sec": 0 00:10:57.440 }, 00:10:57.440 "claimed": true, 00:10:57.440 "claim_type": "exclusive_write", 00:10:57.440 "zoned": false, 00:10:57.440 "supported_io_types": { 00:10:57.440 "read": true, 00:10:57.440 "write": true, 00:10:57.440 "unmap": true, 00:10:57.440 "flush": true, 00:10:57.440 "reset": true, 00:10:57.440 "nvme_admin": false, 00:10:57.440 "nvme_io": false, 00:10:57.440 "nvme_io_md": false, 00:10:57.440 "write_zeroes": true, 00:10:57.440 "zcopy": true, 00:10:57.440 "get_zone_info": false, 00:10:57.440 "zone_management": false, 00:10:57.440 "zone_append": false, 00:10:57.440 "compare": false, 00:10:57.440 "compare_and_write": false, 00:10:57.440 "abort": true, 00:10:57.440 "seek_hole": false, 00:10:57.440 "seek_data": false, 00:10:57.440 "copy": true, 00:10:57.440 "nvme_iov_md": false 00:10:57.440 }, 00:10:57.440 "memory_domains": [ 00:10:57.440 { 00:10:57.440 "dma_device_id": "system", 00:10:57.440 "dma_device_type": 1 00:10:57.440 }, 00:10:57.440 { 00:10:57.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.440 "dma_device_type": 2 00:10:57.440 } 00:10:57.440 ], 00:10:57.440 "driver_specific": {} 00:10:57.440 } 00:10:57.440 ] 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.440 "name": "Existed_Raid", 00:10:57.440 "uuid": "c0847b0b-f6d8-44b8-9b44-62ed5bfbc902", 00:10:57.440 "strip_size_kb": 64, 00:10:57.440 "state": "configuring", 00:10:57.440 "raid_level": "raid0", 00:10:57.440 "superblock": true, 00:10:57.440 "num_base_bdevs": 3, 00:10:57.440 "num_base_bdevs_discovered": 1, 00:10:57.440 "num_base_bdevs_operational": 3, 00:10:57.440 "base_bdevs_list": [ 00:10:57.440 { 00:10:57.440 "name": "BaseBdev1", 00:10:57.440 "uuid": "74ff93c1-93fc-44cc-80cf-13754c91b5ea", 00:10:57.440 "is_configured": true, 00:10:57.440 "data_offset": 2048, 00:10:57.440 "data_size": 63488 
00:10:57.440 }, 00:10:57.440 { 00:10:57.440 "name": "BaseBdev2", 00:10:57.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.440 "is_configured": false, 00:10:57.440 "data_offset": 0, 00:10:57.440 "data_size": 0 00:10:57.440 }, 00:10:57.440 { 00:10:57.440 "name": "BaseBdev3", 00:10:57.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.440 "is_configured": false, 00:10:57.440 "data_offset": 0, 00:10:57.440 "data_size": 0 00:10:57.440 } 00:10:57.440 ] 00:10:57.440 }' 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.440 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.007 14:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:58.007 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.007 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.007 [2024-11-04 14:36:56.950597] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:58.007 [2024-11-04 14:36:56.950655] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:58.007 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.007 14:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:58.007 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.007 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.007 [2024-11-04 14:36:56.958663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:58.007 [2024-11-04 
14:36:56.961245] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:58.007 [2024-11-04 14:36:56.961407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:58.007 [2024-11-04 14:36:56.961527] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:58.007 [2024-11-04 14:36:56.961666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:58.007 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.007 14:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:58.008 14:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:58.008 14:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:58.008 14:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.008 14:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.008 14:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.008 14:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.008 14:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:58.008 14:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.008 14:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.008 14:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.008 14:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:10:58.008 14:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.008 14:36:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.008 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.008 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.008 14:36:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.008 14:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.008 "name": "Existed_Raid", 00:10:58.008 "uuid": "9c7127de-3f3a-4dba-89ff-315b422b0005", 00:10:58.008 "strip_size_kb": 64, 00:10:58.008 "state": "configuring", 00:10:58.008 "raid_level": "raid0", 00:10:58.008 "superblock": true, 00:10:58.008 "num_base_bdevs": 3, 00:10:58.008 "num_base_bdevs_discovered": 1, 00:10:58.008 "num_base_bdevs_operational": 3, 00:10:58.008 "base_bdevs_list": [ 00:10:58.008 { 00:10:58.008 "name": "BaseBdev1", 00:10:58.008 "uuid": "74ff93c1-93fc-44cc-80cf-13754c91b5ea", 00:10:58.008 "is_configured": true, 00:10:58.008 "data_offset": 2048, 00:10:58.008 "data_size": 63488 00:10:58.008 }, 00:10:58.008 { 00:10:58.008 "name": "BaseBdev2", 00:10:58.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.008 "is_configured": false, 00:10:58.008 "data_offset": 0, 00:10:58.008 "data_size": 0 00:10:58.008 }, 00:10:58.008 { 00:10:58.008 "name": "BaseBdev3", 00:10:58.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.008 "is_configured": false, 00:10:58.008 "data_offset": 0, 00:10:58.008 "data_size": 0 00:10:58.008 } 00:10:58.008 ] 00:10:58.008 }' 00:10:58.008 14:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.008 14:36:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:58.579 14:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:58.579 14:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.579 14:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.579 [2024-11-04 14:36:57.520220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:58.579 BaseBdev2 00:10:58.579 14:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.579 14:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:58.579 14:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:58.579 14:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:58.579 14:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:58.579 14:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:58.579 14:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:58.579 14:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:58.579 14:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.579 14:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.579 14:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.579 14:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:58.579 14:36:57 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.579 14:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.579 [ 00:10:58.579 { 00:10:58.579 "name": "BaseBdev2", 00:10:58.579 "aliases": [ 00:10:58.579 "65c1f319-7edd-4caa-81c2-4f32405b7930" 00:10:58.579 ], 00:10:58.579 "product_name": "Malloc disk", 00:10:58.579 "block_size": 512, 00:10:58.579 "num_blocks": 65536, 00:10:58.579 "uuid": "65c1f319-7edd-4caa-81c2-4f32405b7930", 00:10:58.579 "assigned_rate_limits": { 00:10:58.579 "rw_ios_per_sec": 0, 00:10:58.579 "rw_mbytes_per_sec": 0, 00:10:58.579 "r_mbytes_per_sec": 0, 00:10:58.579 "w_mbytes_per_sec": 0 00:10:58.579 }, 00:10:58.579 "claimed": true, 00:10:58.579 "claim_type": "exclusive_write", 00:10:58.579 "zoned": false, 00:10:58.579 "supported_io_types": { 00:10:58.579 "read": true, 00:10:58.580 "write": true, 00:10:58.580 "unmap": true, 00:10:58.580 "flush": true, 00:10:58.580 "reset": true, 00:10:58.580 "nvme_admin": false, 00:10:58.580 "nvme_io": false, 00:10:58.580 "nvme_io_md": false, 00:10:58.580 "write_zeroes": true, 00:10:58.580 "zcopy": true, 00:10:58.580 "get_zone_info": false, 00:10:58.580 "zone_management": false, 00:10:58.580 "zone_append": false, 00:10:58.580 "compare": false, 00:10:58.580 "compare_and_write": false, 00:10:58.580 "abort": true, 00:10:58.580 "seek_hole": false, 00:10:58.580 "seek_data": false, 00:10:58.580 "copy": true, 00:10:58.580 "nvme_iov_md": false 00:10:58.580 }, 00:10:58.580 "memory_domains": [ 00:10:58.580 { 00:10:58.580 "dma_device_id": "system", 00:10:58.580 "dma_device_type": 1 00:10:58.580 }, 00:10:58.580 { 00:10:58.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.580 "dma_device_type": 2 00:10:58.580 } 00:10:58.580 ], 00:10:58.580 "driver_specific": {} 00:10:58.580 } 00:10:58.580 ] 00:10:58.580 14:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.580 14:36:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@909 -- # return 0 00:10:58.580 14:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:58.580 14:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:58.580 14:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:58.580 14:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:58.580 14:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:58.580 14:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.580 14:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.580 14:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:58.580 14:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.580 14:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.580 14:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.580 14:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.580 14:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.580 14:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.580 14:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.580 14:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.580 14:36:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.580 14:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.580 "name": "Existed_Raid", 00:10:58.580 "uuid": "9c7127de-3f3a-4dba-89ff-315b422b0005", 00:10:58.580 "strip_size_kb": 64, 00:10:58.580 "state": "configuring", 00:10:58.580 "raid_level": "raid0", 00:10:58.580 "superblock": true, 00:10:58.580 "num_base_bdevs": 3, 00:10:58.580 "num_base_bdevs_discovered": 2, 00:10:58.580 "num_base_bdevs_operational": 3, 00:10:58.580 "base_bdevs_list": [ 00:10:58.580 { 00:10:58.580 "name": "BaseBdev1", 00:10:58.580 "uuid": "74ff93c1-93fc-44cc-80cf-13754c91b5ea", 00:10:58.580 "is_configured": true, 00:10:58.580 "data_offset": 2048, 00:10:58.580 "data_size": 63488 00:10:58.580 }, 00:10:58.580 { 00:10:58.580 "name": "BaseBdev2", 00:10:58.580 "uuid": "65c1f319-7edd-4caa-81c2-4f32405b7930", 00:10:58.580 "is_configured": true, 00:10:58.580 "data_offset": 2048, 00:10:58.580 "data_size": 63488 00:10:58.580 }, 00:10:58.580 { 00:10:58.580 "name": "BaseBdev3", 00:10:58.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.580 "is_configured": false, 00:10:58.580 "data_offset": 0, 00:10:58.580 "data_size": 0 00:10:58.580 } 00:10:58.580 ] 00:10:58.580 }' 00:10:58.580 14:36:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.580 14:36:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.151 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:59.151 14:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.151 14:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.151 [2024-11-04 14:36:58.095285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:59.151 [2024-11-04 14:36:58.095602] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:59.151 [2024-11-04 14:36:58.095632] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:59.151 BaseBdev3 00:10:59.151 [2024-11-04 14:36:58.095984] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:59.151 [2024-11-04 14:36:58.096207] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:59.151 [2024-11-04 14:36:58.096230] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:59.151 [2024-11-04 14:36:58.096408] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.151 14:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.151 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:59.151 14:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:59.151 14:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:59.151 14:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:59.151 14:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:59.151 14:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:59.151 14:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:59.151 14:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.151 14:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.151 14:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:10:59.151 14:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:59.151 14:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.151 14:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.151 [ 00:10:59.151 { 00:10:59.151 "name": "BaseBdev3", 00:10:59.151 "aliases": [ 00:10:59.151 "f5e89e17-713c-47b2-ac3e-b4f47e6bbd70" 00:10:59.151 ], 00:10:59.151 "product_name": "Malloc disk", 00:10:59.151 "block_size": 512, 00:10:59.151 "num_blocks": 65536, 00:10:59.151 "uuid": "f5e89e17-713c-47b2-ac3e-b4f47e6bbd70", 00:10:59.151 "assigned_rate_limits": { 00:10:59.151 "rw_ios_per_sec": 0, 00:10:59.151 "rw_mbytes_per_sec": 0, 00:10:59.151 "r_mbytes_per_sec": 0, 00:10:59.152 "w_mbytes_per_sec": 0 00:10:59.152 }, 00:10:59.152 "claimed": true, 00:10:59.152 "claim_type": "exclusive_write", 00:10:59.152 "zoned": false, 00:10:59.152 "supported_io_types": { 00:10:59.152 "read": true, 00:10:59.152 "write": true, 00:10:59.152 "unmap": true, 00:10:59.152 "flush": true, 00:10:59.152 "reset": true, 00:10:59.152 "nvme_admin": false, 00:10:59.152 "nvme_io": false, 00:10:59.152 "nvme_io_md": false, 00:10:59.152 "write_zeroes": true, 00:10:59.152 "zcopy": true, 00:10:59.152 "get_zone_info": false, 00:10:59.152 "zone_management": false, 00:10:59.152 "zone_append": false, 00:10:59.152 "compare": false, 00:10:59.152 "compare_and_write": false, 00:10:59.152 "abort": true, 00:10:59.152 "seek_hole": false, 00:10:59.152 "seek_data": false, 00:10:59.152 "copy": true, 00:10:59.152 "nvme_iov_md": false 00:10:59.152 }, 00:10:59.152 "memory_domains": [ 00:10:59.152 { 00:10:59.152 "dma_device_id": "system", 00:10:59.152 "dma_device_type": 1 00:10:59.152 }, 00:10:59.152 { 00:10:59.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.152 "dma_device_type": 2 00:10:59.152 } 00:10:59.152 ], 00:10:59.152 "driver_specific": 
{} 00:10:59.152 } 00:10:59.152 ] 00:10:59.152 14:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.152 14:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:59.152 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:59.152 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:59.152 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:59.152 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.152 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.152 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:59.152 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.152 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:59.152 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.152 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.152 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.152 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.152 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.152 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.152 14:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:59.152 14:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.152 14:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.152 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.152 "name": "Existed_Raid", 00:10:59.152 "uuid": "9c7127de-3f3a-4dba-89ff-315b422b0005", 00:10:59.152 "strip_size_kb": 64, 00:10:59.152 "state": "online", 00:10:59.152 "raid_level": "raid0", 00:10:59.152 "superblock": true, 00:10:59.152 "num_base_bdevs": 3, 00:10:59.152 "num_base_bdevs_discovered": 3, 00:10:59.152 "num_base_bdevs_operational": 3, 00:10:59.152 "base_bdevs_list": [ 00:10:59.152 { 00:10:59.152 "name": "BaseBdev1", 00:10:59.152 "uuid": "74ff93c1-93fc-44cc-80cf-13754c91b5ea", 00:10:59.152 "is_configured": true, 00:10:59.152 "data_offset": 2048, 00:10:59.152 "data_size": 63488 00:10:59.152 }, 00:10:59.152 { 00:10:59.152 "name": "BaseBdev2", 00:10:59.152 "uuid": "65c1f319-7edd-4caa-81c2-4f32405b7930", 00:10:59.152 "is_configured": true, 00:10:59.152 "data_offset": 2048, 00:10:59.152 "data_size": 63488 00:10:59.152 }, 00:10:59.152 { 00:10:59.152 "name": "BaseBdev3", 00:10:59.152 "uuid": "f5e89e17-713c-47b2-ac3e-b4f47e6bbd70", 00:10:59.152 "is_configured": true, 00:10:59.152 "data_offset": 2048, 00:10:59.152 "data_size": 63488 00:10:59.152 } 00:10:59.152 ] 00:10:59.152 }' 00:10:59.152 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.152 14:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.723 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:59.723 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:59.723 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:10:59.723 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:59.723 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:59.723 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:59.723 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:59.723 14:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.723 14:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.723 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:59.723 [2024-11-04 14:36:58.635906] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:59.723 14:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.723 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:59.723 "name": "Existed_Raid", 00:10:59.723 "aliases": [ 00:10:59.723 "9c7127de-3f3a-4dba-89ff-315b422b0005" 00:10:59.723 ], 00:10:59.723 "product_name": "Raid Volume", 00:10:59.723 "block_size": 512, 00:10:59.723 "num_blocks": 190464, 00:10:59.723 "uuid": "9c7127de-3f3a-4dba-89ff-315b422b0005", 00:10:59.723 "assigned_rate_limits": { 00:10:59.723 "rw_ios_per_sec": 0, 00:10:59.723 "rw_mbytes_per_sec": 0, 00:10:59.723 "r_mbytes_per_sec": 0, 00:10:59.723 "w_mbytes_per_sec": 0 00:10:59.723 }, 00:10:59.723 "claimed": false, 00:10:59.723 "zoned": false, 00:10:59.723 "supported_io_types": { 00:10:59.723 "read": true, 00:10:59.723 "write": true, 00:10:59.723 "unmap": true, 00:10:59.723 "flush": true, 00:10:59.723 "reset": true, 00:10:59.723 "nvme_admin": false, 00:10:59.723 "nvme_io": false, 00:10:59.723 "nvme_io_md": false, 00:10:59.723 
"write_zeroes": true, 00:10:59.723 "zcopy": false, 00:10:59.723 "get_zone_info": false, 00:10:59.723 "zone_management": false, 00:10:59.723 "zone_append": false, 00:10:59.723 "compare": false, 00:10:59.723 "compare_and_write": false, 00:10:59.723 "abort": false, 00:10:59.723 "seek_hole": false, 00:10:59.723 "seek_data": false, 00:10:59.723 "copy": false, 00:10:59.723 "nvme_iov_md": false 00:10:59.723 }, 00:10:59.723 "memory_domains": [ 00:10:59.723 { 00:10:59.723 "dma_device_id": "system", 00:10:59.723 "dma_device_type": 1 00:10:59.723 }, 00:10:59.723 { 00:10:59.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.723 "dma_device_type": 2 00:10:59.723 }, 00:10:59.723 { 00:10:59.723 "dma_device_id": "system", 00:10:59.723 "dma_device_type": 1 00:10:59.723 }, 00:10:59.723 { 00:10:59.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.723 "dma_device_type": 2 00:10:59.723 }, 00:10:59.723 { 00:10:59.723 "dma_device_id": "system", 00:10:59.723 "dma_device_type": 1 00:10:59.723 }, 00:10:59.723 { 00:10:59.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.723 "dma_device_type": 2 00:10:59.723 } 00:10:59.723 ], 00:10:59.723 "driver_specific": { 00:10:59.723 "raid": { 00:10:59.723 "uuid": "9c7127de-3f3a-4dba-89ff-315b422b0005", 00:10:59.723 "strip_size_kb": 64, 00:10:59.723 "state": "online", 00:10:59.723 "raid_level": "raid0", 00:10:59.723 "superblock": true, 00:10:59.723 "num_base_bdevs": 3, 00:10:59.723 "num_base_bdevs_discovered": 3, 00:10:59.723 "num_base_bdevs_operational": 3, 00:10:59.723 "base_bdevs_list": [ 00:10:59.723 { 00:10:59.723 "name": "BaseBdev1", 00:10:59.723 "uuid": "74ff93c1-93fc-44cc-80cf-13754c91b5ea", 00:10:59.723 "is_configured": true, 00:10:59.723 "data_offset": 2048, 00:10:59.723 "data_size": 63488 00:10:59.723 }, 00:10:59.723 { 00:10:59.723 "name": "BaseBdev2", 00:10:59.723 "uuid": "65c1f319-7edd-4caa-81c2-4f32405b7930", 00:10:59.723 "is_configured": true, 00:10:59.723 "data_offset": 2048, 00:10:59.723 "data_size": 63488 00:10:59.723 }, 
00:10:59.723 { 00:10:59.723 "name": "BaseBdev3", 00:10:59.723 "uuid": "f5e89e17-713c-47b2-ac3e-b4f47e6bbd70", 00:10:59.723 "is_configured": true, 00:10:59.723 "data_offset": 2048, 00:10:59.723 "data_size": 63488 00:10:59.723 } 00:10:59.723 ] 00:10:59.723 } 00:10:59.723 } 00:10:59.723 }' 00:10:59.723 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:59.723 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:59.723 BaseBdev2 00:10:59.723 BaseBdev3' 00:10:59.724 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.724 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:59.724 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.724 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:59.724 14:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.724 14:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.724 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.724 14:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.724 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.724 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.724 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.724 
14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:59.724 14:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.724 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.724 14:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.982 14:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.982 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.982 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.982 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.982 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:59.982 14:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.983 14:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.983 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.983 14:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.983 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.983 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.983 14:36:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:59.983 14:36:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.983 14:36:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.983 [2024-11-04 14:36:58.943665] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:59.983 [2024-11-04 14:36:58.943819] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:59.983 [2024-11-04 14:36:58.943913] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:59.983 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.983 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:59.983 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:59.983 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:59.983 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:59.983 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:59.983 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:59.983 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:59.983 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:59.983 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:59.983 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.983 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:59.983 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:59.983 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.983 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.983 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.983 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.983 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.983 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.983 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.983 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.983 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.983 "name": "Existed_Raid", 00:10:59.983 "uuid": "9c7127de-3f3a-4dba-89ff-315b422b0005", 00:10:59.983 "strip_size_kb": 64, 00:10:59.983 "state": "offline", 00:10:59.983 "raid_level": "raid0", 00:10:59.983 "superblock": true, 00:10:59.983 "num_base_bdevs": 3, 00:10:59.983 "num_base_bdevs_discovered": 2, 00:10:59.983 "num_base_bdevs_operational": 2, 00:10:59.983 "base_bdevs_list": [ 00:10:59.983 { 00:10:59.983 "name": null, 00:10:59.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.983 "is_configured": false, 00:10:59.983 "data_offset": 0, 00:10:59.983 "data_size": 63488 00:10:59.983 }, 00:10:59.983 { 00:10:59.983 "name": "BaseBdev2", 00:10:59.983 "uuid": "65c1f319-7edd-4caa-81c2-4f32405b7930", 00:10:59.983 "is_configured": true, 00:10:59.983 "data_offset": 2048, 00:10:59.983 "data_size": 63488 00:10:59.983 }, 00:10:59.983 { 00:10:59.983 "name": "BaseBdev3", 00:10:59.983 "uuid": "f5e89e17-713c-47b2-ac3e-b4f47e6bbd70", 
00:10:59.983 "is_configured": true, 00:10:59.983 "data_offset": 2048, 00:10:59.983 "data_size": 63488 00:10:59.983 } 00:10:59.983 ] 00:10:59.983 }' 00:10:59.983 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.983 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.550 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:00.550 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:00.550 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:00.550 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.550 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.550 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.550 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.550 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:00.550 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:00.550 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:00.550 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.550 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.550 [2024-11-04 14:36:59.607907] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:00.808 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.808 14:36:59 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:00.808 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:00.808 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.808 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.808 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:00.808 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.808 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.808 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:00.808 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:00.808 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:00.808 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.809 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.809 [2024-11-04 14:36:59.751692] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:00.809 [2024-11-04 14:36:59.751748] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:00.809 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.809 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:00.809 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:00.809 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:00.809 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:00.809 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.809 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.809 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.809 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:00.809 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:00.809 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:00.809 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:00.809 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:00.809 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:00.809 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.809 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.809 BaseBdev2 00:11:00.809 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.809 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:00.809 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:00.809 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:00.809 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:00.809 14:36:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:00.809 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:00.809 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:00.809 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.809 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.068 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.068 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:01.068 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.068 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.068 [ 00:11:01.068 { 00:11:01.068 "name": "BaseBdev2", 00:11:01.068 "aliases": [ 00:11:01.068 "d9d6d53a-d087-4862-9513-b3983e80253e" 00:11:01.068 ], 00:11:01.068 "product_name": "Malloc disk", 00:11:01.068 "block_size": 512, 00:11:01.068 "num_blocks": 65536, 00:11:01.068 "uuid": "d9d6d53a-d087-4862-9513-b3983e80253e", 00:11:01.068 "assigned_rate_limits": { 00:11:01.068 "rw_ios_per_sec": 0, 00:11:01.068 "rw_mbytes_per_sec": 0, 00:11:01.068 "r_mbytes_per_sec": 0, 00:11:01.068 "w_mbytes_per_sec": 0 00:11:01.068 }, 00:11:01.068 "claimed": false, 00:11:01.068 "zoned": false, 00:11:01.068 "supported_io_types": { 00:11:01.068 "read": true, 00:11:01.068 "write": true, 00:11:01.068 "unmap": true, 00:11:01.068 "flush": true, 00:11:01.068 "reset": true, 00:11:01.068 "nvme_admin": false, 00:11:01.068 "nvme_io": false, 00:11:01.068 "nvme_io_md": false, 00:11:01.068 "write_zeroes": true, 00:11:01.068 "zcopy": true, 00:11:01.068 "get_zone_info": false, 00:11:01.068 
"zone_management": false, 00:11:01.068 "zone_append": false, 00:11:01.068 "compare": false, 00:11:01.068 "compare_and_write": false, 00:11:01.068 "abort": true, 00:11:01.068 "seek_hole": false, 00:11:01.068 "seek_data": false, 00:11:01.068 "copy": true, 00:11:01.068 "nvme_iov_md": false 00:11:01.068 }, 00:11:01.068 "memory_domains": [ 00:11:01.068 { 00:11:01.068 "dma_device_id": "system", 00:11:01.068 "dma_device_type": 1 00:11:01.068 }, 00:11:01.068 { 00:11:01.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.068 "dma_device_type": 2 00:11:01.068 } 00:11:01.068 ], 00:11:01.068 "driver_specific": {} 00:11:01.068 } 00:11:01.068 ] 00:11:01.068 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.068 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:01.068 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:01.068 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:01.068 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:01.068 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.068 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.068 BaseBdev3 00:11:01.068 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.068 14:36:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:01.068 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:01.068 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:01.068 14:36:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local i 00:11:01.068 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:01.068 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:01.068 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:01.068 14:36:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.068 14:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.068 14:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.068 14:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:01.068 14:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.068 14:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.068 [ 00:11:01.068 { 00:11:01.068 "name": "BaseBdev3", 00:11:01.068 "aliases": [ 00:11:01.068 "283465d7-2290-4783-9537-0908acddd66a" 00:11:01.068 ], 00:11:01.068 "product_name": "Malloc disk", 00:11:01.068 "block_size": 512, 00:11:01.068 "num_blocks": 65536, 00:11:01.068 "uuid": "283465d7-2290-4783-9537-0908acddd66a", 00:11:01.068 "assigned_rate_limits": { 00:11:01.068 "rw_ios_per_sec": 0, 00:11:01.068 "rw_mbytes_per_sec": 0, 00:11:01.068 "r_mbytes_per_sec": 0, 00:11:01.068 "w_mbytes_per_sec": 0 00:11:01.068 }, 00:11:01.068 "claimed": false, 00:11:01.068 "zoned": false, 00:11:01.068 "supported_io_types": { 00:11:01.068 "read": true, 00:11:01.068 "write": true, 00:11:01.068 "unmap": true, 00:11:01.068 "flush": true, 00:11:01.068 "reset": true, 00:11:01.068 "nvme_admin": false, 00:11:01.068 "nvme_io": false, 00:11:01.068 "nvme_io_md": false, 00:11:01.068 "write_zeroes": true, 00:11:01.068 
"zcopy": true, 00:11:01.068 "get_zone_info": false, 00:11:01.068 "zone_management": false, 00:11:01.068 "zone_append": false, 00:11:01.068 "compare": false, 00:11:01.068 "compare_and_write": false, 00:11:01.068 "abort": true, 00:11:01.068 "seek_hole": false, 00:11:01.068 "seek_data": false, 00:11:01.068 "copy": true, 00:11:01.068 "nvme_iov_md": false 00:11:01.068 }, 00:11:01.068 "memory_domains": [ 00:11:01.068 { 00:11:01.068 "dma_device_id": "system", 00:11:01.068 "dma_device_type": 1 00:11:01.068 }, 00:11:01.068 { 00:11:01.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.068 "dma_device_type": 2 00:11:01.068 } 00:11:01.068 ], 00:11:01.068 "driver_specific": {} 00:11:01.068 } 00:11:01.068 ] 00:11:01.068 14:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.068 14:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:01.068 14:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:01.068 14:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:01.068 14:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:01.068 14:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.068 14:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.068 [2024-11-04 14:37:00.037380] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:01.068 [2024-11-04 14:37:00.037433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:01.068 [2024-11-04 14:37:00.037481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:01.068 [2024-11-04 14:37:00.040067] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:01.068 14:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.068 14:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:01.068 14:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.068 14:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.068 14:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:01.068 14:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.068 14:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:01.068 14:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.068 14:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.068 14:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.068 14:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.068 14:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.068 14:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.068 14:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.068 14:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.068 14:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.068 14:37:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.069 "name": "Existed_Raid", 00:11:01.069 "uuid": "c9df5a15-f550-4ceb-ac06-63eea9784db4", 00:11:01.069 "strip_size_kb": 64, 00:11:01.069 "state": "configuring", 00:11:01.069 "raid_level": "raid0", 00:11:01.069 "superblock": true, 00:11:01.069 "num_base_bdevs": 3, 00:11:01.069 "num_base_bdevs_discovered": 2, 00:11:01.069 "num_base_bdevs_operational": 3, 00:11:01.069 "base_bdevs_list": [ 00:11:01.069 { 00:11:01.069 "name": "BaseBdev1", 00:11:01.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.069 "is_configured": false, 00:11:01.069 "data_offset": 0, 00:11:01.069 "data_size": 0 00:11:01.069 }, 00:11:01.069 { 00:11:01.069 "name": "BaseBdev2", 00:11:01.069 "uuid": "d9d6d53a-d087-4862-9513-b3983e80253e", 00:11:01.069 "is_configured": true, 00:11:01.069 "data_offset": 2048, 00:11:01.069 "data_size": 63488 00:11:01.069 }, 00:11:01.069 { 00:11:01.069 "name": "BaseBdev3", 00:11:01.069 "uuid": "283465d7-2290-4783-9537-0908acddd66a", 00:11:01.069 "is_configured": true, 00:11:01.069 "data_offset": 2048, 00:11:01.069 "data_size": 63488 00:11:01.069 } 00:11:01.069 ] 00:11:01.069 }' 00:11:01.069 14:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.069 14:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.634 14:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:01.634 14:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.634 14:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.634 [2024-11-04 14:37:00.549506] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:01.634 14:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.634 14:37:00 
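Aside (not part of the test output): after `bdev_raid_remove_base_bdev BaseBdev2`, the dump above shows the array back in `configuring` with `num_base_bdevs_discovered` dropping to 1 while `num_base_bdevs_operational` stays 3 — removed slots keep a null-name placeholder rather than disappearing. A sketch of the invariant `verify_raid_bdev_state` is checking, on a condensed (hypothetical) copy of that JSON:

```python
import json

# Condensed stand-in for the "Existed_Raid" info dumped above, after
# BaseBdev2 was removed: its slot keeps a null name placeholder.
info = json.loads("""
{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid0",
  "strip_size_kb": 64,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": false},
    {"name": null, "is_configured": false},
    {"name": "BaseBdev3", "is_configured": true}
  ]
}
""")

# In miniature: discovered must equal the count of configured slots,
# and can never exceed the operational target.
discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
assert discovered == info["num_base_bdevs_discovered"]
assert discovered <= info["num_base_bdevs_operational"]
print(info["state"], discovered)  # configuring 1
```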
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:01.634 14:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.634 14:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.634 14:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:01.634 14:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.634 14:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:01.634 14:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.634 14:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.634 14:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.634 14:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.634 14:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.634 14:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.634 14:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.634 14:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.634 14:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.634 14:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.634 "name": "Existed_Raid", 00:11:01.634 "uuid": "c9df5a15-f550-4ceb-ac06-63eea9784db4", 00:11:01.634 "strip_size_kb": 64, 
00:11:01.634 "state": "configuring", 00:11:01.634 "raid_level": "raid0", 00:11:01.634 "superblock": true, 00:11:01.634 "num_base_bdevs": 3, 00:11:01.634 "num_base_bdevs_discovered": 1, 00:11:01.634 "num_base_bdevs_operational": 3, 00:11:01.634 "base_bdevs_list": [ 00:11:01.634 { 00:11:01.634 "name": "BaseBdev1", 00:11:01.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.634 "is_configured": false, 00:11:01.634 "data_offset": 0, 00:11:01.634 "data_size": 0 00:11:01.634 }, 00:11:01.634 { 00:11:01.634 "name": null, 00:11:01.634 "uuid": "d9d6d53a-d087-4862-9513-b3983e80253e", 00:11:01.634 "is_configured": false, 00:11:01.634 "data_offset": 0, 00:11:01.634 "data_size": 63488 00:11:01.634 }, 00:11:01.634 { 00:11:01.634 "name": "BaseBdev3", 00:11:01.634 "uuid": "283465d7-2290-4783-9537-0908acddd66a", 00:11:01.634 "is_configured": true, 00:11:01.634 "data_offset": 2048, 00:11:01.634 "data_size": 63488 00:11:01.634 } 00:11:01.634 ] 00:11:01.634 }' 00:11:01.634 14:37:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.634 14:37:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.200 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:02.200 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.200 14:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.200 14:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.200 14:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.200 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:02.200 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:11:02.200 14:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.200 14:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.200 [2024-11-04 14:37:01.138583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:02.200 BaseBdev1 00:11:02.200 14:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.200 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:02.200 14:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:02.200 14:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:02.200 14:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:02.200 14:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:02.200 14:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:02.200 14:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:02.200 14:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.200 14:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.200 14:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.200 14:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:02.200 14:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.200 14:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.200 
[ 00:11:02.200 { 00:11:02.200 "name": "BaseBdev1", 00:11:02.200 "aliases": [ 00:11:02.200 "88f65121-fc13-4f08-b73d-7c6cb231ff9f" 00:11:02.200 ], 00:11:02.200 "product_name": "Malloc disk", 00:11:02.200 "block_size": 512, 00:11:02.200 "num_blocks": 65536, 00:11:02.200 "uuid": "88f65121-fc13-4f08-b73d-7c6cb231ff9f", 00:11:02.200 "assigned_rate_limits": { 00:11:02.200 "rw_ios_per_sec": 0, 00:11:02.200 "rw_mbytes_per_sec": 0, 00:11:02.200 "r_mbytes_per_sec": 0, 00:11:02.200 "w_mbytes_per_sec": 0 00:11:02.200 }, 00:11:02.200 "claimed": true, 00:11:02.200 "claim_type": "exclusive_write", 00:11:02.200 "zoned": false, 00:11:02.200 "supported_io_types": { 00:11:02.200 "read": true, 00:11:02.200 "write": true, 00:11:02.200 "unmap": true, 00:11:02.200 "flush": true, 00:11:02.200 "reset": true, 00:11:02.200 "nvme_admin": false, 00:11:02.200 "nvme_io": false, 00:11:02.200 "nvme_io_md": false, 00:11:02.200 "write_zeroes": true, 00:11:02.200 "zcopy": true, 00:11:02.200 "get_zone_info": false, 00:11:02.200 "zone_management": false, 00:11:02.200 "zone_append": false, 00:11:02.200 "compare": false, 00:11:02.200 "compare_and_write": false, 00:11:02.200 "abort": true, 00:11:02.200 "seek_hole": false, 00:11:02.200 "seek_data": false, 00:11:02.200 "copy": true, 00:11:02.200 "nvme_iov_md": false 00:11:02.200 }, 00:11:02.200 "memory_domains": [ 00:11:02.200 { 00:11:02.200 "dma_device_id": "system", 00:11:02.200 "dma_device_type": 1 00:11:02.200 }, 00:11:02.200 { 00:11:02.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.200 "dma_device_type": 2 00:11:02.200 } 00:11:02.200 ], 00:11:02.200 "driver_specific": {} 00:11:02.200 } 00:11:02.200 ] 00:11:02.200 14:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.200 14:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:02.201 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:11:02.201 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.201 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.201 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:02.201 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.201 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:02.201 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.201 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.201 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.201 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.201 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.201 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.201 14:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.201 14:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.201 14:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.201 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.201 "name": "Existed_Raid", 00:11:02.201 "uuid": "c9df5a15-f550-4ceb-ac06-63eea9784db4", 00:11:02.201 "strip_size_kb": 64, 00:11:02.201 "state": "configuring", 00:11:02.201 "raid_level": "raid0", 00:11:02.201 "superblock": true, 
00:11:02.201 "num_base_bdevs": 3, 00:11:02.201 "num_base_bdevs_discovered": 2, 00:11:02.201 "num_base_bdevs_operational": 3, 00:11:02.201 "base_bdevs_list": [ 00:11:02.201 { 00:11:02.201 "name": "BaseBdev1", 00:11:02.201 "uuid": "88f65121-fc13-4f08-b73d-7c6cb231ff9f", 00:11:02.201 "is_configured": true, 00:11:02.201 "data_offset": 2048, 00:11:02.201 "data_size": 63488 00:11:02.201 }, 00:11:02.201 { 00:11:02.201 "name": null, 00:11:02.201 "uuid": "d9d6d53a-d087-4862-9513-b3983e80253e", 00:11:02.201 "is_configured": false, 00:11:02.201 "data_offset": 0, 00:11:02.201 "data_size": 63488 00:11:02.201 }, 00:11:02.201 { 00:11:02.201 "name": "BaseBdev3", 00:11:02.201 "uuid": "283465d7-2290-4783-9537-0908acddd66a", 00:11:02.201 "is_configured": true, 00:11:02.201 "data_offset": 2048, 00:11:02.201 "data_size": 63488 00:11:02.201 } 00:11:02.201 ] 00:11:02.201 }' 00:11:02.201 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.201 14:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.766 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.766 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:02.766 14:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.766 14:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.766 14:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.766 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:02.766 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:02.766 14:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:11:02.766 14:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.766 [2024-11-04 14:37:01.726784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:02.766 14:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.766 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:02.766 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.766 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.766 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:02.766 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.766 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:02.766 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.766 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.766 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.767 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.767 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.767 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.767 14:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.767 14:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:02.767 14:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.767 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.767 "name": "Existed_Raid", 00:11:02.767 "uuid": "c9df5a15-f550-4ceb-ac06-63eea9784db4", 00:11:02.767 "strip_size_kb": 64, 00:11:02.767 "state": "configuring", 00:11:02.767 "raid_level": "raid0", 00:11:02.767 "superblock": true, 00:11:02.767 "num_base_bdevs": 3, 00:11:02.767 "num_base_bdevs_discovered": 1, 00:11:02.767 "num_base_bdevs_operational": 3, 00:11:02.767 "base_bdevs_list": [ 00:11:02.767 { 00:11:02.767 "name": "BaseBdev1", 00:11:02.767 "uuid": "88f65121-fc13-4f08-b73d-7c6cb231ff9f", 00:11:02.767 "is_configured": true, 00:11:02.767 "data_offset": 2048, 00:11:02.767 "data_size": 63488 00:11:02.767 }, 00:11:02.767 { 00:11:02.767 "name": null, 00:11:02.767 "uuid": "d9d6d53a-d087-4862-9513-b3983e80253e", 00:11:02.767 "is_configured": false, 00:11:02.767 "data_offset": 0, 00:11:02.767 "data_size": 63488 00:11:02.767 }, 00:11:02.767 { 00:11:02.767 "name": null, 00:11:02.767 "uuid": "283465d7-2290-4783-9537-0908acddd66a", 00:11:02.767 "is_configured": false, 00:11:02.767 "data_offset": 0, 00:11:02.767 "data_size": 63488 00:11:02.767 } 00:11:02.767 ] 00:11:02.767 }' 00:11:02.767 14:37:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.767 14:37:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.333 14:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:03.333 14:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.333 14:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.333 14:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:11:03.333 14:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.333 14:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:03.333 14:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:03.333 14:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.334 14:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.334 [2024-11-04 14:37:02.314973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:03.334 14:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.334 14:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:03.334 14:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.334 14:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.334 14:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:03.334 14:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.334 14:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:03.334 14:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.334 14:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.334 14:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.334 14:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:11:03.334 14:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.334 14:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.334 14:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.334 14:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.334 14:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.334 14:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.334 "name": "Existed_Raid", 00:11:03.334 "uuid": "c9df5a15-f550-4ceb-ac06-63eea9784db4", 00:11:03.334 "strip_size_kb": 64, 00:11:03.334 "state": "configuring", 00:11:03.334 "raid_level": "raid0", 00:11:03.334 "superblock": true, 00:11:03.334 "num_base_bdevs": 3, 00:11:03.334 "num_base_bdevs_discovered": 2, 00:11:03.334 "num_base_bdevs_operational": 3, 00:11:03.334 "base_bdevs_list": [ 00:11:03.334 { 00:11:03.334 "name": "BaseBdev1", 00:11:03.334 "uuid": "88f65121-fc13-4f08-b73d-7c6cb231ff9f", 00:11:03.334 "is_configured": true, 00:11:03.334 "data_offset": 2048, 00:11:03.334 "data_size": 63488 00:11:03.334 }, 00:11:03.334 { 00:11:03.334 "name": null, 00:11:03.334 "uuid": "d9d6d53a-d087-4862-9513-b3983e80253e", 00:11:03.334 "is_configured": false, 00:11:03.334 "data_offset": 0, 00:11:03.334 "data_size": 63488 00:11:03.334 }, 00:11:03.334 { 00:11:03.334 "name": "BaseBdev3", 00:11:03.334 "uuid": "283465d7-2290-4783-9537-0908acddd66a", 00:11:03.334 "is_configured": true, 00:11:03.334 "data_offset": 2048, 00:11:03.334 "data_size": 63488 00:11:03.334 } 00:11:03.334 ] 00:11:03.334 }' 00:11:03.334 14:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.334 14:37:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:03.901 14:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.901 14:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:03.901 14:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.901 14:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.901 14:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.901 14:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:03.901 14:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:03.901 14:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.901 14:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.901 [2024-11-04 14:37:02.883223] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:03.901 14:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.901 14:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:03.901 14:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.901 14:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.901 14:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:03.901 14:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.901 14:37:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:03.901 14:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.901 14:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.901 14:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.901 14:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.901 14:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.901 14:37:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.901 14:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.901 14:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.901 14:37:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.901 14:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.901 "name": "Existed_Raid", 00:11:03.901 "uuid": "c9df5a15-f550-4ceb-ac06-63eea9784db4", 00:11:03.901 "strip_size_kb": 64, 00:11:03.901 "state": "configuring", 00:11:03.901 "raid_level": "raid0", 00:11:03.901 "superblock": true, 00:11:03.901 "num_base_bdevs": 3, 00:11:03.901 "num_base_bdevs_discovered": 1, 00:11:03.901 "num_base_bdevs_operational": 3, 00:11:03.901 "base_bdevs_list": [ 00:11:03.901 { 00:11:03.901 "name": null, 00:11:03.901 "uuid": "88f65121-fc13-4f08-b73d-7c6cb231ff9f", 00:11:03.901 "is_configured": false, 00:11:03.901 "data_offset": 0, 00:11:03.901 "data_size": 63488 00:11:03.901 }, 00:11:03.901 { 00:11:03.901 "name": null, 00:11:03.901 "uuid": "d9d6d53a-d087-4862-9513-b3983e80253e", 00:11:03.901 "is_configured": false, 00:11:03.901 "data_offset": 0, 00:11:03.901 
"data_size": 63488 00:11:03.901 }, 00:11:03.901 { 00:11:03.901 "name": "BaseBdev3", 00:11:03.901 "uuid": "283465d7-2290-4783-9537-0908acddd66a", 00:11:03.901 "is_configured": true, 00:11:03.901 "data_offset": 2048, 00:11:03.901 "data_size": 63488 00:11:03.901 } 00:11:03.901 ] 00:11:03.901 }' 00:11:03.901 14:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.901 14:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.468 14:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.468 14:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:04.468 14:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.468 14:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.468 14:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.468 14:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:04.468 14:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:04.468 14:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.469 14:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.469 [2024-11-04 14:37:03.553854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:04.469 14:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.469 14:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:04.469 14:37:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.469 14:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.469 14:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:04.469 14:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.469 14:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:04.469 14:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.469 14:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.469 14:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.469 14:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.469 14:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.469 14:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.469 14:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.469 14:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.469 14:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.727 14:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.727 "name": "Existed_Raid", 00:11:04.727 "uuid": "c9df5a15-f550-4ceb-ac06-63eea9784db4", 00:11:04.727 "strip_size_kb": 64, 00:11:04.727 "state": "configuring", 00:11:04.727 "raid_level": "raid0", 00:11:04.727 "superblock": true, 00:11:04.727 "num_base_bdevs": 3, 00:11:04.727 
"num_base_bdevs_discovered": 2, 00:11:04.727 "num_base_bdevs_operational": 3, 00:11:04.727 "base_bdevs_list": [ 00:11:04.727 { 00:11:04.727 "name": null, 00:11:04.727 "uuid": "88f65121-fc13-4f08-b73d-7c6cb231ff9f", 00:11:04.727 "is_configured": false, 00:11:04.727 "data_offset": 0, 00:11:04.727 "data_size": 63488 00:11:04.727 }, 00:11:04.727 { 00:11:04.727 "name": "BaseBdev2", 00:11:04.727 "uuid": "d9d6d53a-d087-4862-9513-b3983e80253e", 00:11:04.727 "is_configured": true, 00:11:04.727 "data_offset": 2048, 00:11:04.727 "data_size": 63488 00:11:04.727 }, 00:11:04.727 { 00:11:04.727 "name": "BaseBdev3", 00:11:04.727 "uuid": "283465d7-2290-4783-9537-0908acddd66a", 00:11:04.727 "is_configured": true, 00:11:04.727 "data_offset": 2048, 00:11:04.727 "data_size": 63488 00:11:04.727 } 00:11:04.727 ] 00:11:04.727 }' 00:11:04.727 14:37:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.727 14:37:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.985 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:04.985 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.985 14:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.985 14:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:05.244 14:37:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 88f65121-fc13-4f08-b73d-7c6cb231ff9f 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.244 [2024-11-04 14:37:04.229536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:05.244 [2024-11-04 14:37:04.229780] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:05.244 [2024-11-04 14:37:04.229802] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:05.244 NewBaseBdev 00:11:05.244 [2024-11-04 14:37:04.230168] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:05.244 [2024-11-04 14:37:04.230361] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:05.244 [2024-11-04 14:37:04.230510] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:05.244 [2024-11-04 14:37:04.230714] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:11:05.244 
14:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.244 [ 00:11:05.244 { 00:11:05.244 "name": "NewBaseBdev", 00:11:05.244 "aliases": [ 00:11:05.244 "88f65121-fc13-4f08-b73d-7c6cb231ff9f" 00:11:05.244 ], 00:11:05.244 "product_name": "Malloc disk", 00:11:05.244 "block_size": 512, 00:11:05.244 "num_blocks": 65536, 00:11:05.244 "uuid": "88f65121-fc13-4f08-b73d-7c6cb231ff9f", 00:11:05.244 "assigned_rate_limits": { 00:11:05.244 "rw_ios_per_sec": 0, 00:11:05.244 "rw_mbytes_per_sec": 0, 00:11:05.244 "r_mbytes_per_sec": 0, 00:11:05.244 "w_mbytes_per_sec": 0 00:11:05.244 }, 00:11:05.244 "claimed": true, 00:11:05.244 "claim_type": "exclusive_write", 00:11:05.244 "zoned": false, 00:11:05.244 "supported_io_types": { 00:11:05.244 "read": true, 00:11:05.244 "write": true, 00:11:05.244 
"unmap": true, 00:11:05.244 "flush": true, 00:11:05.244 "reset": true, 00:11:05.244 "nvme_admin": false, 00:11:05.244 "nvme_io": false, 00:11:05.244 "nvme_io_md": false, 00:11:05.244 "write_zeroes": true, 00:11:05.244 "zcopy": true, 00:11:05.244 "get_zone_info": false, 00:11:05.244 "zone_management": false, 00:11:05.244 "zone_append": false, 00:11:05.244 "compare": false, 00:11:05.244 "compare_and_write": false, 00:11:05.244 "abort": true, 00:11:05.244 "seek_hole": false, 00:11:05.244 "seek_data": false, 00:11:05.244 "copy": true, 00:11:05.244 "nvme_iov_md": false 00:11:05.244 }, 00:11:05.244 "memory_domains": [ 00:11:05.244 { 00:11:05.244 "dma_device_id": "system", 00:11:05.244 "dma_device_type": 1 00:11:05.244 }, 00:11:05.244 { 00:11:05.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.244 "dma_device_type": 2 00:11:05.244 } 00:11:05.244 ], 00:11:05.244 "driver_specific": {} 00:11:05.244 } 00:11:05.244 ] 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.244 14:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.245 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.245 14:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.245 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.245 "name": "Existed_Raid", 00:11:05.245 "uuid": "c9df5a15-f550-4ceb-ac06-63eea9784db4", 00:11:05.245 "strip_size_kb": 64, 00:11:05.245 "state": "online", 00:11:05.245 "raid_level": "raid0", 00:11:05.245 "superblock": true, 00:11:05.245 "num_base_bdevs": 3, 00:11:05.245 "num_base_bdevs_discovered": 3, 00:11:05.245 "num_base_bdevs_operational": 3, 00:11:05.245 "base_bdevs_list": [ 00:11:05.245 { 00:11:05.245 "name": "NewBaseBdev", 00:11:05.245 "uuid": "88f65121-fc13-4f08-b73d-7c6cb231ff9f", 00:11:05.245 "is_configured": true, 00:11:05.245 "data_offset": 2048, 00:11:05.245 "data_size": 63488 00:11:05.245 }, 00:11:05.245 { 00:11:05.245 "name": "BaseBdev2", 00:11:05.245 "uuid": "d9d6d53a-d087-4862-9513-b3983e80253e", 00:11:05.245 "is_configured": true, 00:11:05.245 "data_offset": 2048, 00:11:05.245 "data_size": 63488 00:11:05.245 }, 00:11:05.245 { 00:11:05.245 "name": "BaseBdev3", 00:11:05.245 "uuid": "283465d7-2290-4783-9537-0908acddd66a", 00:11:05.245 
"is_configured": true, 00:11:05.245 "data_offset": 2048, 00:11:05.245 "data_size": 63488 00:11:05.245 } 00:11:05.245 ] 00:11:05.245 }' 00:11:05.245 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.245 14:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.812 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:05.812 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:05.812 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:05.812 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:05.812 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:05.812 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:05.812 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:05.812 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:05.812 14:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.812 14:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.812 [2024-11-04 14:37:04.814193] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:05.812 14:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.812 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:05.812 "name": "Existed_Raid", 00:11:05.812 "aliases": [ 00:11:05.812 "c9df5a15-f550-4ceb-ac06-63eea9784db4" 00:11:05.812 ], 00:11:05.812 "product_name": "Raid 
Volume", 00:11:05.812 "block_size": 512, 00:11:05.812 "num_blocks": 190464, 00:11:05.812 "uuid": "c9df5a15-f550-4ceb-ac06-63eea9784db4", 00:11:05.812 "assigned_rate_limits": { 00:11:05.812 "rw_ios_per_sec": 0, 00:11:05.812 "rw_mbytes_per_sec": 0, 00:11:05.812 "r_mbytes_per_sec": 0, 00:11:05.812 "w_mbytes_per_sec": 0 00:11:05.812 }, 00:11:05.812 "claimed": false, 00:11:05.812 "zoned": false, 00:11:05.812 "supported_io_types": { 00:11:05.812 "read": true, 00:11:05.812 "write": true, 00:11:05.812 "unmap": true, 00:11:05.812 "flush": true, 00:11:05.812 "reset": true, 00:11:05.812 "nvme_admin": false, 00:11:05.812 "nvme_io": false, 00:11:05.812 "nvme_io_md": false, 00:11:05.812 "write_zeroes": true, 00:11:05.812 "zcopy": false, 00:11:05.812 "get_zone_info": false, 00:11:05.812 "zone_management": false, 00:11:05.812 "zone_append": false, 00:11:05.812 "compare": false, 00:11:05.812 "compare_and_write": false, 00:11:05.812 "abort": false, 00:11:05.812 "seek_hole": false, 00:11:05.812 "seek_data": false, 00:11:05.812 "copy": false, 00:11:05.812 "nvme_iov_md": false 00:11:05.812 }, 00:11:05.812 "memory_domains": [ 00:11:05.812 { 00:11:05.812 "dma_device_id": "system", 00:11:05.812 "dma_device_type": 1 00:11:05.812 }, 00:11:05.812 { 00:11:05.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.812 "dma_device_type": 2 00:11:05.812 }, 00:11:05.812 { 00:11:05.812 "dma_device_id": "system", 00:11:05.812 "dma_device_type": 1 00:11:05.812 }, 00:11:05.812 { 00:11:05.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.812 "dma_device_type": 2 00:11:05.812 }, 00:11:05.812 { 00:11:05.812 "dma_device_id": "system", 00:11:05.812 "dma_device_type": 1 00:11:05.812 }, 00:11:05.812 { 00:11:05.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.812 "dma_device_type": 2 00:11:05.812 } 00:11:05.812 ], 00:11:05.812 "driver_specific": { 00:11:05.812 "raid": { 00:11:05.812 "uuid": "c9df5a15-f550-4ceb-ac06-63eea9784db4", 00:11:05.812 "strip_size_kb": 64, 00:11:05.812 "state": "online", 
00:11:05.812 "raid_level": "raid0", 00:11:05.812 "superblock": true, 00:11:05.812 "num_base_bdevs": 3, 00:11:05.812 "num_base_bdevs_discovered": 3, 00:11:05.812 "num_base_bdevs_operational": 3, 00:11:05.812 "base_bdevs_list": [ 00:11:05.812 { 00:11:05.812 "name": "NewBaseBdev", 00:11:05.812 "uuid": "88f65121-fc13-4f08-b73d-7c6cb231ff9f", 00:11:05.812 "is_configured": true, 00:11:05.812 "data_offset": 2048, 00:11:05.812 "data_size": 63488 00:11:05.812 }, 00:11:05.812 { 00:11:05.812 "name": "BaseBdev2", 00:11:05.812 "uuid": "d9d6d53a-d087-4862-9513-b3983e80253e", 00:11:05.812 "is_configured": true, 00:11:05.812 "data_offset": 2048, 00:11:05.812 "data_size": 63488 00:11:05.812 }, 00:11:05.812 { 00:11:05.812 "name": "BaseBdev3", 00:11:05.812 "uuid": "283465d7-2290-4783-9537-0908acddd66a", 00:11:05.812 "is_configured": true, 00:11:05.812 "data_offset": 2048, 00:11:05.812 "data_size": 63488 00:11:05.812 } 00:11:05.812 ] 00:11:05.812 } 00:11:05.812 } 00:11:05.812 }' 00:11:05.812 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:05.812 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:05.812 BaseBdev2 00:11:05.812 BaseBdev3' 00:11:05.812 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.071 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:06.071 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.071 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:06.071 14:37:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:11:06.071 14:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.071 14:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.071 14:37:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.071 14:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.071 14:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.071 14:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.071 14:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:06.071 14:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.071 14:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.071 14:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.071 14:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.071 14:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.071 14:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.071 14:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:06.071 14:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:06.071 14:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.071 14:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.071 
14:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:06.071 14:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.071 14:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:06.071 14:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:06.071 14:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:06.071 14:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.071 14:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.071 [2024-11-04 14:37:05.121803] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:06.071 [2024-11-04 14:37:05.121834] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:06.071 [2024-11-04 14:37:05.121919] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:06.071 [2024-11-04 14:37:05.122047] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:06.071 [2024-11-04 14:37:05.122070] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:06.071 14:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.071 14:37:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64444 00:11:06.071 14:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 64444 ']' 00:11:06.071 14:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 64444 00:11:06.071 14:37:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:11:06.071 14:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:06.071 14:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64444 00:11:06.071 killing process with pid 64444 00:11:06.071 14:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:06.071 14:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:06.071 14:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64444' 00:11:06.071 14:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 64444 00:11:06.071 [2024-11-04 14:37:05.161839] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:06.071 14:37:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 64444 00:11:06.330 [2024-11-04 14:37:05.417345] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:07.734 14:37:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:07.734 00:11:07.734 real 0m11.755s 00:11:07.734 user 0m19.636s 00:11:07.734 sys 0m1.586s 00:11:07.734 14:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:07.734 ************************************ 00:11:07.734 END TEST raid_state_function_test_sb 00:11:07.734 ************************************ 00:11:07.734 14:37:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.734 14:37:06 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:11:07.734 14:37:06 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:07.734 14:37:06 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:11:07.734 14:37:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:07.735 ************************************ 00:11:07.735 START TEST raid_superblock_test 00:11:07.735 ************************************ 00:11:07.735 14:37:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 3 00:11:07.735 14:37:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:07.735 14:37:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:07.735 14:37:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:07.735 14:37:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:07.735 14:37:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:07.735 14:37:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:07.735 14:37:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:07.735 14:37:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:07.735 14:37:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:07.735 14:37:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:07.735 14:37:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:07.735 14:37:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:07.735 14:37:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:07.735 14:37:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:07.735 14:37:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:07.735 14:37:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:07.735 14:37:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65074 00:11:07.735 14:37:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:07.735 14:37:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65074 00:11:07.735 14:37:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 65074 ']' 00:11:07.735 14:37:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.735 14:37:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:07.735 14:37:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.735 14:37:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:07.735 14:37:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.735 [2024-11-04 14:37:06.561873] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:11:07.735 [2024-11-04 14:37:06.562122] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65074 ] 00:11:07.735 [2024-11-04 14:37:06.749467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.994 [2024-11-04 14:37:06.876756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.994 [2024-11-04 14:37:07.066819] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:07.994 [2024-11-04 14:37:07.066895] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.561 14:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:08.561 14:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:11:08.561 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:08.561 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:08.561 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:08.561 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:08.561 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:08.561 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:08.561 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:08.561 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:08.561 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:08.561 
14:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.561 14:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.561 malloc1 00:11:08.561 14:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.561 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:08.561 14:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.561 14:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.561 [2024-11-04 14:37:07.562327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:08.561 [2024-11-04 14:37:07.562406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.561 [2024-11-04 14:37:07.562457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:08.561 [2024-11-04 14:37:07.562472] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.561 [2024-11-04 14:37:07.565277] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.561 [2024-11-04 14:37:07.565525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:08.561 pt1 00:11:08.561 14:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.561 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:08.561 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:08.561 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:08.561 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:08.561 14:37:07 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:08.561 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:08.561 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:08.561 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:08.561 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:08.561 14:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.561 14:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.561 malloc2 00:11:08.561 14:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.561 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:08.561 14:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.561 14:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.561 [2024-11-04 14:37:07.612918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:08.561 [2024-11-04 14:37:07.613046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.561 [2024-11-04 14:37:07.613085] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:08.561 [2024-11-04 14:37:07.613099] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.561 [2024-11-04 14:37:07.615973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.561 [2024-11-04 14:37:07.616053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:08.562 
pt2 00:11:08.562 14:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.562 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:08.562 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:08.562 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:08.562 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:08.562 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:08.562 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:08.562 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:08.562 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:08.562 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:08.562 14:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.562 14:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.562 malloc3 00:11:08.562 14:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.562 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:08.562 14:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.562 14:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.562 [2024-11-04 14:37:07.675801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:08.562 [2024-11-04 14:37:07.675882] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.562 [2024-11-04 14:37:07.675915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:08.562 [2024-11-04 14:37:07.675930] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.562 [2024-11-04 14:37:07.678982] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.562 [2024-11-04 14:37:07.679039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:08.562 pt3 00:11:08.562 14:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.562 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:08.562 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:08.562 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:11:08.562 14:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.562 14:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.821 [2024-11-04 14:37:07.683941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:08.821 [2024-11-04 14:37:07.686476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:08.821 [2024-11-04 14:37:07.686577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:08.821 [2024-11-04 14:37:07.686784] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:08.821 [2024-11-04 14:37:07.686807] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:08.821 [2024-11-04 14:37:07.687151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:11:08.821 [2024-11-04 14:37:07.687377] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:08.821 [2024-11-04 14:37:07.687394] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:08.821 [2024-11-04 14:37:07.687578] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:08.821 14:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.821 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:08.821 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:08.821 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.821 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:08.821 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.821 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:08.821 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.821 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.821 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.821 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.821 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.821 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.821 14:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.821 14:37:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.821 14:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.821 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.821 "name": "raid_bdev1", 00:11:08.821 "uuid": "c6b547a3-d312-4e9a-aaa5-e97846f6b025", 00:11:08.821 "strip_size_kb": 64, 00:11:08.821 "state": "online", 00:11:08.821 "raid_level": "raid0", 00:11:08.821 "superblock": true, 00:11:08.821 "num_base_bdevs": 3, 00:11:08.821 "num_base_bdevs_discovered": 3, 00:11:08.821 "num_base_bdevs_operational": 3, 00:11:08.821 "base_bdevs_list": [ 00:11:08.821 { 00:11:08.821 "name": "pt1", 00:11:08.821 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:08.821 "is_configured": true, 00:11:08.821 "data_offset": 2048, 00:11:08.821 "data_size": 63488 00:11:08.821 }, 00:11:08.821 { 00:11:08.821 "name": "pt2", 00:11:08.821 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:08.821 "is_configured": true, 00:11:08.821 "data_offset": 2048, 00:11:08.821 "data_size": 63488 00:11:08.821 }, 00:11:08.821 { 00:11:08.821 "name": "pt3", 00:11:08.821 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:08.821 "is_configured": true, 00:11:08.821 "data_offset": 2048, 00:11:08.821 "data_size": 63488 00:11:08.821 } 00:11:08.821 ] 00:11:08.821 }' 00:11:08.821 14:37:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.821 14:37:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.080 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:09.080 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:09.080 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:09.080 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:11:09.080 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:09.080 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:09.368 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:09.368 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.368 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.368 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:09.368 [2024-11-04 14:37:08.208474] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:09.368 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.368 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:09.368 "name": "raid_bdev1", 00:11:09.368 "aliases": [ 00:11:09.368 "c6b547a3-d312-4e9a-aaa5-e97846f6b025" 00:11:09.368 ], 00:11:09.368 "product_name": "Raid Volume", 00:11:09.368 "block_size": 512, 00:11:09.368 "num_blocks": 190464, 00:11:09.368 "uuid": "c6b547a3-d312-4e9a-aaa5-e97846f6b025", 00:11:09.368 "assigned_rate_limits": { 00:11:09.368 "rw_ios_per_sec": 0, 00:11:09.368 "rw_mbytes_per_sec": 0, 00:11:09.368 "r_mbytes_per_sec": 0, 00:11:09.368 "w_mbytes_per_sec": 0 00:11:09.368 }, 00:11:09.368 "claimed": false, 00:11:09.368 "zoned": false, 00:11:09.368 "supported_io_types": { 00:11:09.368 "read": true, 00:11:09.368 "write": true, 00:11:09.368 "unmap": true, 00:11:09.368 "flush": true, 00:11:09.368 "reset": true, 00:11:09.368 "nvme_admin": false, 00:11:09.368 "nvme_io": false, 00:11:09.368 "nvme_io_md": false, 00:11:09.368 "write_zeroes": true, 00:11:09.368 "zcopy": false, 00:11:09.368 "get_zone_info": false, 00:11:09.368 "zone_management": false, 00:11:09.368 "zone_append": false, 00:11:09.368 "compare": 
false, 00:11:09.368 "compare_and_write": false, 00:11:09.368 "abort": false, 00:11:09.368 "seek_hole": false, 00:11:09.368 "seek_data": false, 00:11:09.368 "copy": false, 00:11:09.368 "nvme_iov_md": false 00:11:09.368 }, 00:11:09.368 "memory_domains": [ 00:11:09.368 { 00:11:09.368 "dma_device_id": "system", 00:11:09.368 "dma_device_type": 1 00:11:09.368 }, 00:11:09.368 { 00:11:09.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.368 "dma_device_type": 2 00:11:09.368 }, 00:11:09.368 { 00:11:09.368 "dma_device_id": "system", 00:11:09.368 "dma_device_type": 1 00:11:09.368 }, 00:11:09.368 { 00:11:09.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.368 "dma_device_type": 2 00:11:09.368 }, 00:11:09.368 { 00:11:09.368 "dma_device_id": "system", 00:11:09.368 "dma_device_type": 1 00:11:09.368 }, 00:11:09.368 { 00:11:09.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.368 "dma_device_type": 2 00:11:09.368 } 00:11:09.368 ], 00:11:09.368 "driver_specific": { 00:11:09.368 "raid": { 00:11:09.368 "uuid": "c6b547a3-d312-4e9a-aaa5-e97846f6b025", 00:11:09.368 "strip_size_kb": 64, 00:11:09.368 "state": "online", 00:11:09.368 "raid_level": "raid0", 00:11:09.368 "superblock": true, 00:11:09.368 "num_base_bdevs": 3, 00:11:09.368 "num_base_bdevs_discovered": 3, 00:11:09.368 "num_base_bdevs_operational": 3, 00:11:09.368 "base_bdevs_list": [ 00:11:09.368 { 00:11:09.368 "name": "pt1", 00:11:09.368 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:09.368 "is_configured": true, 00:11:09.368 "data_offset": 2048, 00:11:09.368 "data_size": 63488 00:11:09.368 }, 00:11:09.368 { 00:11:09.368 "name": "pt2", 00:11:09.368 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:09.368 "is_configured": true, 00:11:09.368 "data_offset": 2048, 00:11:09.368 "data_size": 63488 00:11:09.368 }, 00:11:09.368 { 00:11:09.368 "name": "pt3", 00:11:09.368 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:09.368 "is_configured": true, 00:11:09.368 "data_offset": 2048, 00:11:09.368 "data_size": 
63488 00:11:09.368 } 00:11:09.368 ] 00:11:09.368 } 00:11:09.368 } 00:11:09.368 }' 00:11:09.368 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:09.368 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:09.368 pt2 00:11:09.368 pt3' 00:11:09.369 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.369 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:09.369 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.369 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:09.369 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.369 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.369 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.369 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.369 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.369 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.369 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.369 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:09.369 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.369 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.369 
14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.369 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.369 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.369 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.369 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.369 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.369 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:09.369 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.369 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.629 [2024-11-04 14:37:08.544561] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c6b547a3-d312-4e9a-aaa5-e97846f6b025 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c6b547a3-d312-4e9a-aaa5-e97846f6b025 ']' 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.629 [2024-11-04 14:37:08.596201] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:09.629 [2024-11-04 14:37:08.596238] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:09.629 [2024-11-04 14:37:08.596379] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:09.629 [2024-11-04 14:37:08.596456] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:09.629 [2024-11-04 14:37:08.596471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:09.629 14:37:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.629 [2024-11-04 14:37:08.740330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:09.629 [2024-11-04 14:37:08.742950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:09.629 [2024-11-04 14:37:08.743188] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:09.629 [2024-11-04 14:37:08.743391] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:09.629 [2024-11-04 14:37:08.743598] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:09.629 [2024-11-04 14:37:08.743759] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:09.629 [2024-11-04 14:37:08.743977] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:09.629 [2024-11-04 14:37:08.744117] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:09.629 request: 00:11:09.629 { 00:11:09.629 "name": "raid_bdev1", 00:11:09.629 "raid_level": "raid0", 00:11:09.629 "base_bdevs": [ 00:11:09.629 "malloc1", 00:11:09.629 "malloc2", 00:11:09.629 "malloc3" 00:11:09.629 ], 00:11:09.629 "strip_size_kb": 64, 00:11:09.629 "superblock": false, 00:11:09.629 "method": "bdev_raid_create", 00:11:09.629 "req_id": 1 00:11:09.629 } 00:11:09.629 Got JSON-RPC error response 00:11:09.629 response: 00:11:09.629 { 00:11:09.629 "code": -17, 00:11:09.629 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:09.629 } 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:09.629 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:09.630 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:09.630 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:09.888 14:37:08 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.888 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:09.888 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.888 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.888 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.888 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:09.888 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:09.888 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:09.888 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.888 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.888 [2024-11-04 14:37:08.800406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:09.888 [2024-11-04 14:37:08.800605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.888 [2024-11-04 14:37:08.800682] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:09.888 [2024-11-04 14:37:08.800796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.888 [2024-11-04 14:37:08.803725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.888 [2024-11-04 14:37:08.803883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:09.888 [2024-11-04 14:37:08.804118] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:09.888 [2024-11-04 14:37:08.804289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:11:09.888 pt1 00:11:09.888 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.888 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:11:09.888 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:09.888 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.888 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:09.888 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.888 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:09.888 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.888 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.888 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.888 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.888 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.888 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.888 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.888 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.888 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.888 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.888 "name": "raid_bdev1", 00:11:09.888 "uuid": "c6b547a3-d312-4e9a-aaa5-e97846f6b025", 00:11:09.888 
"strip_size_kb": 64, 00:11:09.888 "state": "configuring", 00:11:09.888 "raid_level": "raid0", 00:11:09.888 "superblock": true, 00:11:09.888 "num_base_bdevs": 3, 00:11:09.888 "num_base_bdevs_discovered": 1, 00:11:09.888 "num_base_bdevs_operational": 3, 00:11:09.888 "base_bdevs_list": [ 00:11:09.888 { 00:11:09.888 "name": "pt1", 00:11:09.888 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:09.888 "is_configured": true, 00:11:09.888 "data_offset": 2048, 00:11:09.888 "data_size": 63488 00:11:09.888 }, 00:11:09.888 { 00:11:09.888 "name": null, 00:11:09.888 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:09.888 "is_configured": false, 00:11:09.888 "data_offset": 2048, 00:11:09.888 "data_size": 63488 00:11:09.888 }, 00:11:09.888 { 00:11:09.888 "name": null, 00:11:09.888 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:09.888 "is_configured": false, 00:11:09.888 "data_offset": 2048, 00:11:09.888 "data_size": 63488 00:11:09.888 } 00:11:09.888 ] 00:11:09.888 }' 00:11:09.888 14:37:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.888 14:37:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.454 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:10.454 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:10.454 14:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.454 14:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.454 [2024-11-04 14:37:09.312825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:10.454 [2024-11-04 14:37:09.312913] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.454 [2024-11-04 14:37:09.312981] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:11:10.454 [2024-11-04 14:37:09.312999] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.454 [2024-11-04 14:37:09.313543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.454 [2024-11-04 14:37:09.313575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:10.454 [2024-11-04 14:37:09.313683] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:10.454 [2024-11-04 14:37:09.313715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:10.454 pt2 00:11:10.454 14:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.454 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:10.454 14:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.454 14:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.454 [2024-11-04 14:37:09.320814] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:10.454 14:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.454 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:11:10.454 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.455 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.455 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:10.455 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.455 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:10.455 14:37:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.455 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.455 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.455 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.455 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.455 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.455 14:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.455 14:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.455 14:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.455 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.455 "name": "raid_bdev1", 00:11:10.455 "uuid": "c6b547a3-d312-4e9a-aaa5-e97846f6b025", 00:11:10.455 "strip_size_kb": 64, 00:11:10.455 "state": "configuring", 00:11:10.455 "raid_level": "raid0", 00:11:10.455 "superblock": true, 00:11:10.455 "num_base_bdevs": 3, 00:11:10.455 "num_base_bdevs_discovered": 1, 00:11:10.455 "num_base_bdevs_operational": 3, 00:11:10.455 "base_bdevs_list": [ 00:11:10.455 { 00:11:10.455 "name": "pt1", 00:11:10.455 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:10.455 "is_configured": true, 00:11:10.455 "data_offset": 2048, 00:11:10.455 "data_size": 63488 00:11:10.455 }, 00:11:10.455 { 00:11:10.455 "name": null, 00:11:10.455 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:10.455 "is_configured": false, 00:11:10.455 "data_offset": 0, 00:11:10.455 "data_size": 63488 00:11:10.455 }, 00:11:10.455 { 00:11:10.455 "name": null, 00:11:10.455 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:10.455 
"is_configured": false, 00:11:10.455 "data_offset": 2048, 00:11:10.455 "data_size": 63488 00:11:10.455 } 00:11:10.455 ] 00:11:10.455 }' 00:11:10.455 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.455 14:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.021 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:11.021 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:11.021 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:11.021 14:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.021 14:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.021 [2024-11-04 14:37:09.840966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:11.021 [2024-11-04 14:37:09.841197] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.021 [2024-11-04 14:37:09.841235] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:11.021 [2024-11-04 14:37:09.841254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.021 [2024-11-04 14:37:09.841859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.021 [2024-11-04 14:37:09.841890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:11.021 [2024-11-04 14:37:09.842021] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:11.021 [2024-11-04 14:37:09.842059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:11.021 pt2 00:11:11.021 14:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:11.021 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:11.021 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:11.021 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:11.021 14:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.021 14:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.021 [2024-11-04 14:37:09.848940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:11.021 [2024-11-04 14:37:09.849004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.021 [2024-11-04 14:37:09.849026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:11.021 [2024-11-04 14:37:09.849043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.021 [2024-11-04 14:37:09.849497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.021 [2024-11-04 14:37:09.849535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:11.021 [2024-11-04 14:37:09.849609] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:11.021 [2024-11-04 14:37:09.849657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:11.021 [2024-11-04 14:37:09.849799] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:11.021 [2024-11-04 14:37:09.849820] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:11.021 [2024-11-04 14:37:09.850156] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:11.021 [2024-11-04 14:37:09.850341] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:11.021 [2024-11-04 14:37:09.850364] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:11.021 [2024-11-04 14:37:09.850526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:11.021 pt3 00:11:11.021 14:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.021 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:11.021 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:11.021 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:11.021 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:11.021 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:11.021 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:11.021 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.021 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:11.021 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.021 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.021 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.021 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.021 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.021 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:11:11.021 14:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.021 14:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.021 14:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.021 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.021 "name": "raid_bdev1", 00:11:11.021 "uuid": "c6b547a3-d312-4e9a-aaa5-e97846f6b025", 00:11:11.021 "strip_size_kb": 64, 00:11:11.021 "state": "online", 00:11:11.021 "raid_level": "raid0", 00:11:11.021 "superblock": true, 00:11:11.021 "num_base_bdevs": 3, 00:11:11.021 "num_base_bdevs_discovered": 3, 00:11:11.021 "num_base_bdevs_operational": 3, 00:11:11.021 "base_bdevs_list": [ 00:11:11.021 { 00:11:11.021 "name": "pt1", 00:11:11.021 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:11.021 "is_configured": true, 00:11:11.021 "data_offset": 2048, 00:11:11.021 "data_size": 63488 00:11:11.021 }, 00:11:11.021 { 00:11:11.021 "name": "pt2", 00:11:11.021 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:11.021 "is_configured": true, 00:11:11.021 "data_offset": 2048, 00:11:11.021 "data_size": 63488 00:11:11.021 }, 00:11:11.021 { 00:11:11.021 "name": "pt3", 00:11:11.021 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:11.021 "is_configured": true, 00:11:11.021 "data_offset": 2048, 00:11:11.021 "data_size": 63488 00:11:11.021 } 00:11:11.021 ] 00:11:11.021 }' 00:11:11.021 14:37:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.021 14:37:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.279 14:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:11.279 14:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:11.279 14:37:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:11.279 14:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:11.279 14:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:11.279 14:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:11.279 14:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:11.279 14:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:11.279 14:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.279 14:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.279 [2024-11-04 14:37:10.393573] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:11.537 14:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.537 14:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:11.537 "name": "raid_bdev1", 00:11:11.537 "aliases": [ 00:11:11.537 "c6b547a3-d312-4e9a-aaa5-e97846f6b025" 00:11:11.537 ], 00:11:11.537 "product_name": "Raid Volume", 00:11:11.537 "block_size": 512, 00:11:11.537 "num_blocks": 190464, 00:11:11.537 "uuid": "c6b547a3-d312-4e9a-aaa5-e97846f6b025", 00:11:11.537 "assigned_rate_limits": { 00:11:11.537 "rw_ios_per_sec": 0, 00:11:11.537 "rw_mbytes_per_sec": 0, 00:11:11.537 "r_mbytes_per_sec": 0, 00:11:11.537 "w_mbytes_per_sec": 0 00:11:11.537 }, 00:11:11.537 "claimed": false, 00:11:11.537 "zoned": false, 00:11:11.537 "supported_io_types": { 00:11:11.537 "read": true, 00:11:11.537 "write": true, 00:11:11.537 "unmap": true, 00:11:11.537 "flush": true, 00:11:11.537 "reset": true, 00:11:11.537 "nvme_admin": false, 00:11:11.537 "nvme_io": false, 00:11:11.537 "nvme_io_md": false, 00:11:11.537 
"write_zeroes": true, 00:11:11.537 "zcopy": false, 00:11:11.537 "get_zone_info": false, 00:11:11.537 "zone_management": false, 00:11:11.537 "zone_append": false, 00:11:11.537 "compare": false, 00:11:11.537 "compare_and_write": false, 00:11:11.537 "abort": false, 00:11:11.537 "seek_hole": false, 00:11:11.537 "seek_data": false, 00:11:11.537 "copy": false, 00:11:11.537 "nvme_iov_md": false 00:11:11.537 }, 00:11:11.537 "memory_domains": [ 00:11:11.537 { 00:11:11.537 "dma_device_id": "system", 00:11:11.537 "dma_device_type": 1 00:11:11.537 }, 00:11:11.537 { 00:11:11.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.537 "dma_device_type": 2 00:11:11.537 }, 00:11:11.537 { 00:11:11.537 "dma_device_id": "system", 00:11:11.537 "dma_device_type": 1 00:11:11.537 }, 00:11:11.537 { 00:11:11.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.537 "dma_device_type": 2 00:11:11.537 }, 00:11:11.537 { 00:11:11.537 "dma_device_id": "system", 00:11:11.537 "dma_device_type": 1 00:11:11.537 }, 00:11:11.537 { 00:11:11.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.537 "dma_device_type": 2 00:11:11.537 } 00:11:11.537 ], 00:11:11.537 "driver_specific": { 00:11:11.537 "raid": { 00:11:11.537 "uuid": "c6b547a3-d312-4e9a-aaa5-e97846f6b025", 00:11:11.537 "strip_size_kb": 64, 00:11:11.537 "state": "online", 00:11:11.537 "raid_level": "raid0", 00:11:11.537 "superblock": true, 00:11:11.537 "num_base_bdevs": 3, 00:11:11.537 "num_base_bdevs_discovered": 3, 00:11:11.537 "num_base_bdevs_operational": 3, 00:11:11.537 "base_bdevs_list": [ 00:11:11.537 { 00:11:11.537 "name": "pt1", 00:11:11.537 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:11.537 "is_configured": true, 00:11:11.537 "data_offset": 2048, 00:11:11.537 "data_size": 63488 00:11:11.537 }, 00:11:11.537 { 00:11:11.537 "name": "pt2", 00:11:11.537 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:11.537 "is_configured": true, 00:11:11.537 "data_offset": 2048, 00:11:11.537 "data_size": 63488 00:11:11.537 }, 00:11:11.537 
{ 00:11:11.537 "name": "pt3", 00:11:11.537 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:11.537 "is_configured": true, 00:11:11.537 "data_offset": 2048, 00:11:11.537 "data_size": 63488 00:11:11.537 } 00:11:11.537 ] 00:11:11.537 } 00:11:11.537 } 00:11:11.537 }' 00:11:11.537 14:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:11.537 14:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:11.538 pt2 00:11:11.538 pt3' 00:11:11.538 14:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.538 14:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:11.538 14:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.538 14:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:11.538 14:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.538 14:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.538 14:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.538 14:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.538 14:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.538 14:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.538 14:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.538 14:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:11:11.538 14:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:11.538 14:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.538 14:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.538 14:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.538 14:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.538 14:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.538 14:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.796 14:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:11.796 14:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.796 14:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.796 14:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.796 14:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.796 14:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.796 14:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.796 14:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:11.796 14:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:11.796 14:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.796 14:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.796 [2024-11-04 
14:37:10.721602] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:11.796 14:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.796 14:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c6b547a3-d312-4e9a-aaa5-e97846f6b025 '!=' c6b547a3-d312-4e9a-aaa5-e97846f6b025 ']' 00:11:11.796 14:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:11.796 14:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:11.796 14:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:11.796 14:37:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65074 00:11:11.796 14:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 65074 ']' 00:11:11.796 14:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 65074 00:11:11.796 14:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:11:11.796 14:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:11.796 14:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65074 00:11:11.796 killing process with pid 65074 00:11:11.796 14:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:11.796 14:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:11.796 14:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65074' 00:11:11.796 14:37:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 65074 00:11:11.796 [2024-11-04 14:37:10.802076] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:11.796 14:37:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@976 -- # wait 65074 00:11:11.796 [2024-11-04 14:37:10.802209] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:11.796 [2024-11-04 14:37:10.802286] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:11.796 [2024-11-04 14:37:10.802306] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:12.053 [2024-11-04 14:37:11.075090] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:12.988 ************************************ 00:11:12.988 END TEST raid_superblock_test 00:11:12.988 ************************************ 00:11:12.988 14:37:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:12.988 00:11:12.988 real 0m5.614s 00:11:12.988 user 0m8.472s 00:11:12.988 sys 0m0.845s 00:11:12.988 14:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:12.988 14:37:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.248 14:37:12 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:11:13.248 14:37:12 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:13.248 14:37:12 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:13.248 14:37:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:13.248 ************************************ 00:11:13.248 START TEST raid_read_error_test 00:11:13.248 ************************************ 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 read 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:13.248 14:37:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.EkkLQ789vp 00:11:13.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65334 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65334 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 65334 ']' 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:13.248 14:37:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.248 [2024-11-04 14:37:12.245445] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:11:13.248 [2024-11-04 14:37:12.245643] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65334 ] 00:11:13.507 [2024-11-04 14:37:12.430771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.507 [2024-11-04 14:37:12.553987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.765 [2024-11-04 14:37:12.750187] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.765 [2024-11-04 14:37:12.750609] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.333 14:37:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:14.333 14:37:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:14.333 14:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:14.333 14:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:14.333 14:37:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.333 14:37:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.333 BaseBdev1_malloc 00:11:14.333 14:37:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.333 14:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:14.333 14:37:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.333 14:37:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.333 true 00:11:14.333 14:37:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:14.333 14:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:14.333 14:37:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.333 14:37:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.333 [2024-11-04 14:37:13.291442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:14.333 [2024-11-04 14:37:13.291511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.333 [2024-11-04 14:37:13.291541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:14.333 [2024-11-04 14:37:13.291559] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.333 [2024-11-04 14:37:13.294333] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.333 [2024-11-04 14:37:13.294383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:14.333 BaseBdev1 00:11:14.333 14:37:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.333 14:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:14.333 14:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:14.333 14:37:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.333 14:37:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.333 BaseBdev2_malloc 00:11:14.333 14:37:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.333 14:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:14.333 14:37:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.334 true 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.334 [2024-11-04 14:37:13.348166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:14.334 [2024-11-04 14:37:13.348236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.334 [2024-11-04 14:37:13.348265] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:14.334 [2024-11-04 14:37:13.348282] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.334 [2024-11-04 14:37:13.351065] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.334 [2024-11-04 14:37:13.351121] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:14.334 BaseBdev2 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.334 BaseBdev3_malloc 00:11:14.334 14:37:13 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.334 true 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.334 [2024-11-04 14:37:13.414738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:14.334 [2024-11-04 14:37:13.414820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.334 [2024-11-04 14:37:13.414847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:14.334 [2024-11-04 14:37:13.414865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.334 [2024-11-04 14:37:13.417749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.334 [2024-11-04 14:37:13.417948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:14.334 BaseBdev3 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.334 [2024-11-04 14:37:13.422864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:14.334 [2024-11-04 14:37:13.425595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:14.334 [2024-11-04 14:37:13.425868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:14.334 [2024-11-04 14:37:13.426170] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:14.334 [2024-11-04 14:37:13.426192] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:14.334 [2024-11-04 14:37:13.426510] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:14.334 [2024-11-04 14:37:13.426766] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:14.334 [2024-11-04 14:37:13.426789] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:14.334 [2024-11-04 14:37:13.427050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.334 14:37:13 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.334 14:37:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.593 14:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.593 "name": "raid_bdev1", 00:11:14.593 "uuid": "ba121ba7-3ea7-40cc-ac9b-005f003a677e", 00:11:14.593 "strip_size_kb": 64, 00:11:14.593 "state": "online", 00:11:14.593 "raid_level": "raid0", 00:11:14.593 "superblock": true, 00:11:14.593 "num_base_bdevs": 3, 00:11:14.593 "num_base_bdevs_discovered": 3, 00:11:14.593 "num_base_bdevs_operational": 3, 00:11:14.593 "base_bdevs_list": [ 00:11:14.593 { 00:11:14.593 "name": "BaseBdev1", 00:11:14.593 "uuid": "f4aeede5-e078-54c3-883a-0dade8d65757", 00:11:14.593 "is_configured": true, 00:11:14.593 "data_offset": 2048, 00:11:14.593 "data_size": 63488 00:11:14.593 }, 00:11:14.593 { 00:11:14.593 "name": "BaseBdev2", 00:11:14.593 "uuid": "72567a0f-4519-5db1-83a8-63324e390191", 00:11:14.593 "is_configured": true, 00:11:14.593 "data_offset": 2048, 00:11:14.593 "data_size": 63488 
00:11:14.593 }, 00:11:14.593 { 00:11:14.593 "name": "BaseBdev3", 00:11:14.593 "uuid": "1a0f9beb-b44b-5cd1-bf8a-d0232d699aa0", 00:11:14.593 "is_configured": true, 00:11:14.593 "data_offset": 2048, 00:11:14.593 "data_size": 63488 00:11:14.593 } 00:11:14.593 ] 00:11:14.593 }' 00:11:14.593 14:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.593 14:37:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.852 14:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:14.852 14:37:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:15.111 [2024-11-04 14:37:14.052673] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:16.047 14:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:16.047 14:37:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.047 14:37:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.047 14:37:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.047 14:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:16.047 14:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:16.047 14:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:16.047 14:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:16.047 14:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:16.047 14:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:11:16.047 14:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:16.047 14:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.047 14:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:16.047 14:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.047 14:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.047 14:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.047 14:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.047 14:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.047 14:37:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.047 14:37:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.047 14:37:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.048 14:37:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.048 14:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.048 "name": "raid_bdev1", 00:11:16.048 "uuid": "ba121ba7-3ea7-40cc-ac9b-005f003a677e", 00:11:16.048 "strip_size_kb": 64, 00:11:16.048 "state": "online", 00:11:16.048 "raid_level": "raid0", 00:11:16.048 "superblock": true, 00:11:16.048 "num_base_bdevs": 3, 00:11:16.048 "num_base_bdevs_discovered": 3, 00:11:16.048 "num_base_bdevs_operational": 3, 00:11:16.048 "base_bdevs_list": [ 00:11:16.048 { 00:11:16.048 "name": "BaseBdev1", 00:11:16.048 "uuid": "f4aeede5-e078-54c3-883a-0dade8d65757", 00:11:16.048 "is_configured": true, 00:11:16.048 "data_offset": 2048, 00:11:16.048 "data_size": 63488 
00:11:16.048 }, 00:11:16.048 { 00:11:16.048 "name": "BaseBdev2", 00:11:16.048 "uuid": "72567a0f-4519-5db1-83a8-63324e390191", 00:11:16.048 "is_configured": true, 00:11:16.048 "data_offset": 2048, 00:11:16.048 "data_size": 63488 00:11:16.048 }, 00:11:16.048 { 00:11:16.048 "name": "BaseBdev3", 00:11:16.048 "uuid": "1a0f9beb-b44b-5cd1-bf8a-d0232d699aa0", 00:11:16.048 "is_configured": true, 00:11:16.048 "data_offset": 2048, 00:11:16.048 "data_size": 63488 00:11:16.048 } 00:11:16.048 ] 00:11:16.048 }' 00:11:16.048 14:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.048 14:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.615 14:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:16.615 14:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.615 14:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.615 [2024-11-04 14:37:15.507102] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:16.615 [2024-11-04 14:37:15.507134] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:16.615 [2024-11-04 14:37:15.510417] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:16.615 [2024-11-04 14:37:15.510473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.615 [2024-11-04 14:37:15.510526] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:16.615 [2024-11-04 14:37:15.510541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:16.615 { 00:11:16.615 "results": [ 00:11:16.615 { 00:11:16.615 "job": "raid_bdev1", 00:11:16.615 "core_mask": "0x1", 00:11:16.615 "workload": "randrw", 00:11:16.615 "percentage": 50, 
00:11:16.615 "status": "finished", 00:11:16.615 "queue_depth": 1, 00:11:16.615 "io_size": 131072, 00:11:16.615 "runtime": 1.451915, 00:11:16.615 "iops": 11117.041975597745, 00:11:16.615 "mibps": 1389.6302469497182, 00:11:16.615 "io_failed": 1, 00:11:16.615 "io_timeout": 0, 00:11:16.615 "avg_latency_us": 125.68563093454681, 00:11:16.615 "min_latency_us": 35.84, 00:11:16.615 "max_latency_us": 1697.9781818181818 00:11:16.615 } 00:11:16.615 ], 00:11:16.615 "core_count": 1 00:11:16.615 } 00:11:16.615 14:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.615 14:37:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65334 00:11:16.615 14:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 65334 ']' 00:11:16.615 14:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 65334 00:11:16.615 14:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:11:16.615 14:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:16.615 14:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65334 00:11:16.615 killing process with pid 65334 00:11:16.615 14:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:16.615 14:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:16.615 14:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65334' 00:11:16.615 14:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 65334 00:11:16.615 [2024-11-04 14:37:15.550431] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:16.615 14:37:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 65334 00:11:16.874 [2024-11-04 14:37:15.744652] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:17.811 14:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.EkkLQ789vp 00:11:17.811 14:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:17.811 14:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:17.811 14:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:11:17.811 14:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:17.811 14:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:17.811 14:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:17.811 14:37:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:11:17.811 00:11:17.811 real 0m4.716s 00:11:17.811 user 0m5.862s 00:11:17.811 sys 0m0.584s 00:11:17.811 14:37:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:17.811 14:37:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.811 ************************************ 00:11:17.811 END TEST raid_read_error_test 00:11:17.811 ************************************ 00:11:17.811 14:37:16 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:11:17.811 14:37:16 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:17.811 14:37:16 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:17.811 14:37:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:17.811 ************************************ 00:11:17.811 START TEST raid_write_error_test 00:11:17.811 ************************************ 00:11:17.811 14:37:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 write 00:11:17.811 14:37:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:17.811 14:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:17.811 14:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:17.811 14:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:17.811 14:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:17.811 14:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:17.811 14:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:17.811 14:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:17.811 14:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:17.811 14:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:17.811 14:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:17.811 14:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:17.812 14:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:17.812 14:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:17.812 14:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:17.812 14:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:17.812 14:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:17.812 14:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:17.812 14:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:17.812 14:37:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:17.812 14:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:17.812 14:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:17.812 14:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:17.812 14:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:17.812 14:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:17.812 14:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.n8OLs7Sc5O 00:11:17.812 14:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65475 00:11:17.812 14:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65475 00:11:17.812 14:37:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:17.812 14:37:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 65475 ']' 00:11:17.812 14:37:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.812 14:37:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:17.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.812 14:37:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:17.812 14:37:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:17.812 14:37:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.071 [2024-11-04 14:37:17.027445] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:11:18.071 [2024-11-04 14:37:17.027615] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65475 ] 00:11:18.330 [2024-11-04 14:37:17.213778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.330 [2024-11-04 14:37:17.342331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.589 [2024-11-04 14:37:17.546445] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:18.589 [2024-11-04 14:37:17.546532] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.158 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:19.158 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:19.158 14:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:19.158 14:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:19.158 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.158 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.158 BaseBdev1_malloc 00:11:19.158 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.158 14:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:19.158 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.158 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.158 true 00:11:19.158 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.158 14:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:19.158 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.158 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.158 [2024-11-04 14:37:18.061290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:19.158 [2024-11-04 14:37:18.061532] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.158 [2024-11-04 14:37:18.061570] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:19.158 [2024-11-04 14:37:18.061589] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.158 [2024-11-04 14:37:18.064528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.158 [2024-11-04 14:37:18.064589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:19.158 BaseBdev1 00:11:19.158 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.158 14:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:19.158 14:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:19.158 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.158 14:37:18 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:19.158 BaseBdev2_malloc 00:11:19.158 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.158 14:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:19.158 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.158 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.158 true 00:11:19.158 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.158 14:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:19.158 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.158 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.158 [2024-11-04 14:37:18.119817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:19.158 [2024-11-04 14:37:18.120085] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.158 [2024-11-04 14:37:18.120120] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:19.158 [2024-11-04 14:37:18.120138] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.158 [2024-11-04 14:37:18.122930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.158 [2024-11-04 14:37:18.123183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:19.158 BaseBdev2 00:11:19.158 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.158 14:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:19.158 14:37:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:19.158 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.158 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.159 BaseBdev3_malloc 00:11:19.159 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.159 14:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:19.159 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.159 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.159 true 00:11:19.159 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.159 14:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:19.159 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.159 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.159 [2024-11-04 14:37:18.200095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:19.159 [2024-11-04 14:37:18.200179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.159 [2024-11-04 14:37:18.200215] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:19.159 [2024-11-04 14:37:18.200231] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.159 [2024-11-04 14:37:18.203096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.159 [2024-11-04 14:37:18.203154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:19.159 BaseBdev3 00:11:19.159 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.159 14:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:19.159 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.159 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.159 [2024-11-04 14:37:18.208195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:19.159 [2024-11-04 14:37:18.210684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:19.159 [2024-11-04 14:37:18.210970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:19.159 [2024-11-04 14:37:18.211415] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:19.159 [2024-11-04 14:37:18.211569] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:19.159 [2024-11-04 14:37:18.211910] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:19.159 [2024-11-04 14:37:18.212219] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:19.159 [2024-11-04 14:37:18.212244] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:19.159 [2024-11-04 14:37:18.212503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.159 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.159 14:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:19.159 14:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:11:19.159 14:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:19.159 14:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:19.159 14:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.159 14:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:19.159 14:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.159 14:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.159 14:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.159 14:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.159 14:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.159 14:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.159 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.159 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.159 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.159 14:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.159 "name": "raid_bdev1", 00:11:19.159 "uuid": "e457a05b-4c6d-4675-ba3f-84e60aa32833", 00:11:19.159 "strip_size_kb": 64, 00:11:19.159 "state": "online", 00:11:19.159 "raid_level": "raid0", 00:11:19.159 "superblock": true, 00:11:19.159 "num_base_bdevs": 3, 00:11:19.159 "num_base_bdevs_discovered": 3, 00:11:19.159 "num_base_bdevs_operational": 3, 00:11:19.159 "base_bdevs_list": [ 00:11:19.159 { 00:11:19.159 "name": "BaseBdev1", 
00:11:19.159 "uuid": "b33efdbf-2ba2-5df6-813e-e7678418d86e", 00:11:19.159 "is_configured": true, 00:11:19.159 "data_offset": 2048, 00:11:19.159 "data_size": 63488 00:11:19.159 }, 00:11:19.159 { 00:11:19.159 "name": "BaseBdev2", 00:11:19.159 "uuid": "9530e319-155b-5725-8b96-311f5d367157", 00:11:19.159 "is_configured": true, 00:11:19.159 "data_offset": 2048, 00:11:19.159 "data_size": 63488 00:11:19.159 }, 00:11:19.159 { 00:11:19.159 "name": "BaseBdev3", 00:11:19.159 "uuid": "5ccc368a-665a-561a-b27e-012b953c20e2", 00:11:19.159 "is_configured": true, 00:11:19.159 "data_offset": 2048, 00:11:19.159 "data_size": 63488 00:11:19.159 } 00:11:19.159 ] 00:11:19.159 }' 00:11:19.159 14:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.159 14:37:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.726 14:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:19.726 14:37:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:19.984 [2024-11-04 14:37:18.886000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:20.920 14:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:20.920 14:37:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.920 14:37:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.920 14:37:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.920 14:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:20.920 14:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:20.920 14:37:19 bdev_raid.raid_write_error_test -- 
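The xtrace above can be condensed into the RPC sequence the test actually drives: each base bdev is a malloc bdev wrapped by an error-injection bdev (SPDK prefixes these `EE_`, as the `Match on EE_BaseBdev1_malloc` lines show) and a passthru bdev, the three passthru bdevs form the raid0 volume, and a write failure is then injected into the first leg. A hedged reconstruction, where the `rpc` wrapper defaults to printing the calls and the `rpc.py` path is an assumption about the repo layout:

```shell
# Hedged reconstruction of the rpc_cmd sequence from bdev_raid.sh above.
# With DRY_RUN=1 (the default here) calls are printed instead of executed.
rpc() {
    if [[ ${DRY_RUN:-1} == 1 ]]; then
        echo "rpc.py $*"
    else
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"   # assumed rpc.py location
    fi
}

for bdev in BaseBdev1 BaseBdev2 BaseBdev3; do
    rpc bdev_malloc_create 32 512 -b "${bdev}_malloc"        # 32 MiB, 512-byte blocks
    rpc bdev_error_create "${bdev}_malloc"                   # yields EE_<name>
    rpc bdev_passthru_create -b "EE_${bdev}_malloc" -p "$bdev"
done
rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
rpc bdev_error_inject_error EE_BaseBdev1_malloc write failure  # force write errors on leg 1
```

State is then verified by filtering `bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "raid_bdev1")'`, exactly as the JSON blobs in the log show; for raid0 (no redundancy) the bdev stays `online` with all 3 base bdevs even though writes fail.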
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:20.920 14:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:11:20.920 14:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:20.920 14:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:20.920 14:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:20.920 14:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.920 14:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:20.920 14:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.920 14:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.920 14:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.920 14:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.920 14:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.920 14:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.920 14:37:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.920 14:37:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.920 14:37:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.920 14:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.920 "name": "raid_bdev1", 00:11:20.920 "uuid": "e457a05b-4c6d-4675-ba3f-84e60aa32833", 00:11:20.920 "strip_size_kb": 64, 00:11:20.920 "state": "online", 00:11:20.920 
"raid_level": "raid0", 00:11:20.920 "superblock": true, 00:11:20.920 "num_base_bdevs": 3, 00:11:20.920 "num_base_bdevs_discovered": 3, 00:11:20.920 "num_base_bdevs_operational": 3, 00:11:20.920 "base_bdevs_list": [ 00:11:20.920 { 00:11:20.920 "name": "BaseBdev1", 00:11:20.920 "uuid": "b33efdbf-2ba2-5df6-813e-e7678418d86e", 00:11:20.920 "is_configured": true, 00:11:20.921 "data_offset": 2048, 00:11:20.921 "data_size": 63488 00:11:20.921 }, 00:11:20.921 { 00:11:20.921 "name": "BaseBdev2", 00:11:20.921 "uuid": "9530e319-155b-5725-8b96-311f5d367157", 00:11:20.921 "is_configured": true, 00:11:20.921 "data_offset": 2048, 00:11:20.921 "data_size": 63488 00:11:20.921 }, 00:11:20.921 { 00:11:20.921 "name": "BaseBdev3", 00:11:20.921 "uuid": "5ccc368a-665a-561a-b27e-012b953c20e2", 00:11:20.921 "is_configured": true, 00:11:20.921 "data_offset": 2048, 00:11:20.921 "data_size": 63488 00:11:20.921 } 00:11:20.921 ] 00:11:20.921 }' 00:11:20.921 14:37:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.921 14:37:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.180 14:37:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:21.180 14:37:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.180 14:37:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.180 [2024-11-04 14:37:20.276649] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:21.180 [2024-11-04 14:37:20.276853] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:21.180 [2024-11-04 14:37:20.280587] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:21.180 [2024-11-04 14:37:20.280869] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.180 [2024-11-04 14:37:20.281061] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:21.180 [2024-11-04 14:37:20.281271] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:21.180 { 00:11:21.180 "results": [ 00:11:21.180 { 00:11:21.180 "job": "raid_bdev1", 00:11:21.180 "core_mask": "0x1", 00:11:21.180 "workload": "randrw", 00:11:21.180 "percentage": 50, 00:11:21.180 "status": "finished", 00:11:21.180 "queue_depth": 1, 00:11:21.180 "io_size": 131072, 00:11:21.180 "runtime": 1.388506, 00:11:21.180 "iops": 10891.562585973701, 00:11:21.180 "mibps": 1361.4453232467126, 00:11:21.180 "io_failed": 1, 00:11:21.180 "io_timeout": 0, 00:11:21.180 "avg_latency_us": 127.74394075641364, 00:11:21.180 "min_latency_us": 26.181818181818183, 00:11:21.180 "max_latency_us": 1884.16 00:11:21.180 } 00:11:21.180 ], 00:11:21.180 "core_count": 1 00:11:21.180 } 00:11:21.180 14:37:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.180 14:37:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65475 00:11:21.180 14:37:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 65475 ']' 00:11:21.180 14:37:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 65475 00:11:21.180 14:37:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:11:21.180 14:37:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:21.180 14:37:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65475 00:11:21.439 killing process with pid 65475 00:11:21.439 14:37:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:21.439 14:37:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:21.439 14:37:20 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65475' 00:11:21.439 14:37:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 65475 00:11:21.439 [2024-11-04 14:37:20.316865] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:21.439 14:37:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 65475 00:11:21.439 [2024-11-04 14:37:20.544457] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:22.815 14:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.n8OLs7Sc5O 00:11:22.815 14:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:22.815 14:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:22.815 14:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:22.815 14:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:22.815 14:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:22.815 14:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:22.815 14:37:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:22.815 00:11:22.815 real 0m4.724s 00:11:22.815 user 0m5.842s 00:11:22.815 sys 0m0.610s 00:11:22.815 14:37:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:22.815 ************************************ 00:11:22.815 END TEST raid_write_error_test 00:11:22.815 ************************************ 00:11:22.815 14:37:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.815 14:37:21 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:22.815 14:37:21 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:11:22.815 14:37:21 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:22.815 14:37:21 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:22.815 14:37:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:22.815 ************************************ 00:11:22.815 START TEST raid_state_function_test 00:11:22.815 ************************************ 00:11:22.815 14:37:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 false 00:11:22.815 14:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:22.815 14:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:22.815 14:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:22.815 14:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:22.815 14:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:22.815 14:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:22.815 14:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:22.815 14:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:22.815 14:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:22.815 14:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:22.815 14:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:22.816 14:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:22.816 14:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:22.816 14:37:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:22.816 14:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:22.816 14:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:22.816 14:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:22.816 14:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:22.816 14:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:22.816 14:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:22.816 14:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:22.816 14:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:22.816 14:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:22.816 14:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:22.816 14:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:22.816 14:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:22.816 14:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65623 00:11:22.816 Process raid pid: 65623 00:11:22.816 14:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:22.816 14:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65623' 00:11:22.816 14:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65623 00:11:22.816 14:37:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 65623 ']' 00:11:22.816 14:37:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.816 14:37:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:22.816 14:37:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.816 14:37:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:22.816 14:37:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.816 [2024-11-04 14:37:21.769821] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:11:22.816 [2024-11-04 14:37:21.770003] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.075 [2024-11-04 14:37:21.945420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.075 [2024-11-04 14:37:22.072387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.333 [2024-11-04 14:37:22.277225] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.333 [2024-11-04 14:37:22.277278] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.900 14:37:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:23.900 14:37:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:11:23.900 14:37:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:23.900 14:37:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.900 14:37:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.900 [2024-11-04 14:37:22.755334] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:23.900 [2024-11-04 14:37:22.755405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:23.900 [2024-11-04 14:37:22.755420] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:23.900 [2024-11-04 14:37:22.755434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:23.900 [2024-11-04 14:37:22.755443] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:23.900 [2024-11-04 14:37:22.755456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:23.900 14:37:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.900 14:37:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:23.900 14:37:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.901 14:37:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.901 14:37:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.901 14:37:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.901 14:37:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:23.901 14:37:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.901 14:37:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.901 14:37:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.901 14:37:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.901 14:37:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.901 14:37:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.901 14:37:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.901 14:37:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.901 14:37:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.901 14:37:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.901 "name": "Existed_Raid", 00:11:23.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.901 "strip_size_kb": 64, 00:11:23.901 "state": "configuring", 00:11:23.901 "raid_level": "concat", 00:11:23.901 "superblock": false, 00:11:23.901 "num_base_bdevs": 3, 00:11:23.901 "num_base_bdevs_discovered": 0, 00:11:23.901 "num_base_bdevs_operational": 3, 00:11:23.901 "base_bdevs_list": [ 00:11:23.901 { 00:11:23.901 "name": "BaseBdev1", 00:11:23.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.901 "is_configured": false, 00:11:23.901 "data_offset": 0, 00:11:23.901 "data_size": 0 00:11:23.901 }, 00:11:23.901 { 00:11:23.901 "name": "BaseBdev2", 00:11:23.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.901 "is_configured": false, 00:11:23.901 "data_offset": 0, 00:11:23.901 "data_size": 0 00:11:23.901 }, 00:11:23.901 { 00:11:23.901 "name": "BaseBdev3", 00:11:23.901 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:23.901 "is_configured": false, 00:11:23.901 "data_offset": 0, 00:11:23.901 "data_size": 0 00:11:23.901 } 00:11:23.901 ] 00:11:23.901 }' 00:11:23.901 14:37:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.901 14:37:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.160 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:24.160 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.160 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.419 [2024-11-04 14:37:23.283433] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:24.419 [2024-11-04 14:37:23.283479] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:24.419 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.419 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:24.419 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.419 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.419 [2024-11-04 14:37:23.291404] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:24.419 [2024-11-04 14:37:23.291468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:24.419 [2024-11-04 14:37:23.291481] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:24.419 [2024-11-04 14:37:23.291495] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:11:24.419 [2024-11-04 14:37:23.291503] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:24.419 [2024-11-04 14:37:23.291516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:24.419 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.419 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:24.419 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.419 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.419 [2024-11-04 14:37:23.334590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:24.419 BaseBdev1 00:11:24.419 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.419 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:24.419 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:24.419 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:24.419 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:24.419 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:24.419 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:24.419 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:24.419 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.419 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:11:24.419 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.419 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:24.419 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.419 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.419 [ 00:11:24.419 { 00:11:24.419 "name": "BaseBdev1", 00:11:24.419 "aliases": [ 00:11:24.419 "92702ad1-1782-485e-9d0a-2806772019fb" 00:11:24.419 ], 00:11:24.419 "product_name": "Malloc disk", 00:11:24.419 "block_size": 512, 00:11:24.419 "num_blocks": 65536, 00:11:24.419 "uuid": "92702ad1-1782-485e-9d0a-2806772019fb", 00:11:24.419 "assigned_rate_limits": { 00:11:24.419 "rw_ios_per_sec": 0, 00:11:24.419 "rw_mbytes_per_sec": 0, 00:11:24.419 "r_mbytes_per_sec": 0, 00:11:24.419 "w_mbytes_per_sec": 0 00:11:24.419 }, 00:11:24.419 "claimed": true, 00:11:24.419 "claim_type": "exclusive_write", 00:11:24.419 "zoned": false, 00:11:24.419 "supported_io_types": { 00:11:24.419 "read": true, 00:11:24.419 "write": true, 00:11:24.419 "unmap": true, 00:11:24.419 "flush": true, 00:11:24.419 "reset": true, 00:11:24.419 "nvme_admin": false, 00:11:24.419 "nvme_io": false, 00:11:24.419 "nvme_io_md": false, 00:11:24.419 "write_zeroes": true, 00:11:24.419 "zcopy": true, 00:11:24.419 "get_zone_info": false, 00:11:24.419 "zone_management": false, 00:11:24.419 "zone_append": false, 00:11:24.419 "compare": false, 00:11:24.419 "compare_and_write": false, 00:11:24.419 "abort": true, 00:11:24.419 "seek_hole": false, 00:11:24.419 "seek_data": false, 00:11:24.419 "copy": true, 00:11:24.419 "nvme_iov_md": false 00:11:24.419 }, 00:11:24.419 "memory_domains": [ 00:11:24.419 { 00:11:24.419 "dma_device_id": "system", 00:11:24.419 "dma_device_type": 1 00:11:24.419 }, 00:11:24.419 { 00:11:24.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:24.419 "dma_device_type": 2 00:11:24.419 } 00:11:24.419 ], 00:11:24.419 "driver_specific": {} 00:11:24.419 } 00:11:24.419 ] 00:11:24.420 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.420 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:24.420 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:24.420 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.420 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.420 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.420 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.420 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:24.420 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.420 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.420 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.420 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.420 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.420 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.420 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.420 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.420 14:37:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.420 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.420 "name": "Existed_Raid", 00:11:24.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.420 "strip_size_kb": 64, 00:11:24.420 "state": "configuring", 00:11:24.420 "raid_level": "concat", 00:11:24.420 "superblock": false, 00:11:24.420 "num_base_bdevs": 3, 00:11:24.420 "num_base_bdevs_discovered": 1, 00:11:24.420 "num_base_bdevs_operational": 3, 00:11:24.420 "base_bdevs_list": [ 00:11:24.420 { 00:11:24.420 "name": "BaseBdev1", 00:11:24.420 "uuid": "92702ad1-1782-485e-9d0a-2806772019fb", 00:11:24.420 "is_configured": true, 00:11:24.420 "data_offset": 0, 00:11:24.420 "data_size": 65536 00:11:24.420 }, 00:11:24.420 { 00:11:24.420 "name": "BaseBdev2", 00:11:24.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.420 "is_configured": false, 00:11:24.420 "data_offset": 0, 00:11:24.420 "data_size": 0 00:11:24.420 }, 00:11:24.420 { 00:11:24.420 "name": "BaseBdev3", 00:11:24.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.420 "is_configured": false, 00:11:24.420 "data_offset": 0, 00:11:24.420 "data_size": 0 00:11:24.420 } 00:11:24.420 ] 00:11:24.420 }' 00:11:24.420 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.420 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.989 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:24.989 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.989 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.989 [2024-11-04 14:37:23.894800] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:24.989 [2024-11-04 14:37:23.894879] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:24.989 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.989 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:24.989 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.989 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.989 [2024-11-04 14:37:23.902874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:24.989 [2024-11-04 14:37:23.905345] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:24.989 [2024-11-04 14:37:23.905413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:24.989 [2024-11-04 14:37:23.905429] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:24.989 [2024-11-04 14:37:23.905444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:24.989 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.989 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:24.989 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:24.989 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:24.989 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.989 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:24.989 14:37:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.989 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.989 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:24.989 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.989 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.989 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.989 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.989 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.989 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.989 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.989 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.989 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.989 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.989 "name": "Existed_Raid", 00:11:24.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.989 "strip_size_kb": 64, 00:11:24.989 "state": "configuring", 00:11:24.989 "raid_level": "concat", 00:11:24.989 "superblock": false, 00:11:24.989 "num_base_bdevs": 3, 00:11:24.989 "num_base_bdevs_discovered": 1, 00:11:24.989 "num_base_bdevs_operational": 3, 00:11:24.989 "base_bdevs_list": [ 00:11:24.989 { 00:11:24.989 "name": "BaseBdev1", 00:11:24.989 "uuid": "92702ad1-1782-485e-9d0a-2806772019fb", 00:11:24.989 "is_configured": true, 00:11:24.989 "data_offset": 
0, 00:11:24.989 "data_size": 65536 00:11:24.989 }, 00:11:24.989 { 00:11:24.989 "name": "BaseBdev2", 00:11:24.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.989 "is_configured": false, 00:11:24.989 "data_offset": 0, 00:11:24.989 "data_size": 0 00:11:24.989 }, 00:11:24.989 { 00:11:24.989 "name": "BaseBdev3", 00:11:24.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.990 "is_configured": false, 00:11:24.990 "data_offset": 0, 00:11:24.990 "data_size": 0 00:11:24.990 } 00:11:24.990 ] 00:11:24.990 }' 00:11:24.990 14:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.990 14:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.558 14:37:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:25.558 14:37:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.558 14:37:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.558 [2024-11-04 14:37:24.493169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:25.558 BaseBdev2 00:11:25.558 14:37:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.558 14:37:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:25.558 14:37:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:25.558 14:37:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:25.558 14:37:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:25.558 14:37:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:25.558 14:37:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:11:25.558 14:37:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:25.559 14:37:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.559 14:37:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.559 14:37:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.559 14:37:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:25.559 14:37:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.559 14:37:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.559 [ 00:11:25.559 { 00:11:25.559 "name": "BaseBdev2", 00:11:25.559 "aliases": [ 00:11:25.559 "280b13a5-eb06-4308-85bd-6bab4da915dd" 00:11:25.559 ], 00:11:25.559 "product_name": "Malloc disk", 00:11:25.559 "block_size": 512, 00:11:25.559 "num_blocks": 65536, 00:11:25.559 "uuid": "280b13a5-eb06-4308-85bd-6bab4da915dd", 00:11:25.559 "assigned_rate_limits": { 00:11:25.559 "rw_ios_per_sec": 0, 00:11:25.559 "rw_mbytes_per_sec": 0, 00:11:25.559 "r_mbytes_per_sec": 0, 00:11:25.559 "w_mbytes_per_sec": 0 00:11:25.559 }, 00:11:25.559 "claimed": true, 00:11:25.559 "claim_type": "exclusive_write", 00:11:25.559 "zoned": false, 00:11:25.559 "supported_io_types": { 00:11:25.559 "read": true, 00:11:25.559 "write": true, 00:11:25.559 "unmap": true, 00:11:25.559 "flush": true, 00:11:25.559 "reset": true, 00:11:25.559 "nvme_admin": false, 00:11:25.559 "nvme_io": false, 00:11:25.559 "nvme_io_md": false, 00:11:25.559 "write_zeroes": true, 00:11:25.559 "zcopy": true, 00:11:25.559 "get_zone_info": false, 00:11:25.559 "zone_management": false, 00:11:25.559 "zone_append": false, 00:11:25.559 "compare": false, 00:11:25.559 "compare_and_write": false, 00:11:25.559 "abort": true, 00:11:25.559 "seek_hole": 
false, 00:11:25.559 "seek_data": false, 00:11:25.559 "copy": true, 00:11:25.559 "nvme_iov_md": false 00:11:25.559 }, 00:11:25.559 "memory_domains": [ 00:11:25.559 { 00:11:25.559 "dma_device_id": "system", 00:11:25.559 "dma_device_type": 1 00:11:25.559 }, 00:11:25.559 { 00:11:25.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.559 "dma_device_type": 2 00:11:25.559 } 00:11:25.559 ], 00:11:25.559 "driver_specific": {} 00:11:25.559 } 00:11:25.559 ] 00:11:25.559 14:37:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.559 14:37:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:25.559 14:37:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:25.559 14:37:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:25.559 14:37:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:25.559 14:37:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.559 14:37:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.559 14:37:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:25.559 14:37:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.559 14:37:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.559 14:37:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.559 14:37:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.559 14:37:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.559 14:37:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.559 14:37:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.559 14:37:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.559 14:37:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.559 14:37:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.559 14:37:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.559 14:37:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.559 "name": "Existed_Raid", 00:11:25.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.559 "strip_size_kb": 64, 00:11:25.559 "state": "configuring", 00:11:25.559 "raid_level": "concat", 00:11:25.559 "superblock": false, 00:11:25.559 "num_base_bdevs": 3, 00:11:25.559 "num_base_bdevs_discovered": 2, 00:11:25.559 "num_base_bdevs_operational": 3, 00:11:25.559 "base_bdevs_list": [ 00:11:25.559 { 00:11:25.559 "name": "BaseBdev1", 00:11:25.559 "uuid": "92702ad1-1782-485e-9d0a-2806772019fb", 00:11:25.559 "is_configured": true, 00:11:25.559 "data_offset": 0, 00:11:25.559 "data_size": 65536 00:11:25.559 }, 00:11:25.559 { 00:11:25.559 "name": "BaseBdev2", 00:11:25.559 "uuid": "280b13a5-eb06-4308-85bd-6bab4da915dd", 00:11:25.559 "is_configured": true, 00:11:25.559 "data_offset": 0, 00:11:25.559 "data_size": 65536 00:11:25.559 }, 00:11:25.559 { 00:11:25.559 "name": "BaseBdev3", 00:11:25.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.559 "is_configured": false, 00:11:25.559 "data_offset": 0, 00:11:25.559 "data_size": 0 00:11:25.559 } 00:11:25.559 ] 00:11:25.559 }' 00:11:25.559 14:37:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.559 14:37:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:26.127 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:26.127 14:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.127 14:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.127 [2024-11-04 14:37:25.080067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:26.127 [2024-11-04 14:37:25.080135] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:26.127 [2024-11-04 14:37:25.080154] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:26.127 [2024-11-04 14:37:25.080531] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:26.127 [2024-11-04 14:37:25.080765] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:26.127 [2024-11-04 14:37:25.080792] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:26.127 [2024-11-04 14:37:25.081112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.127 BaseBdev3 00:11:26.127 14:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.127 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:26.127 14:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:26.127 14:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:26.127 14:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:26.127 14:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:26.127 14:37:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:26.127 14:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:26.127 14:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.127 14:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.127 14:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.127 14:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:26.127 14:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.127 14:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.127 [ 00:11:26.127 { 00:11:26.127 "name": "BaseBdev3", 00:11:26.127 "aliases": [ 00:11:26.127 "c017d7cc-0925-43fd-969e-b773a10bdae7" 00:11:26.127 ], 00:11:26.127 "product_name": "Malloc disk", 00:11:26.127 "block_size": 512, 00:11:26.127 "num_blocks": 65536, 00:11:26.127 "uuid": "c017d7cc-0925-43fd-969e-b773a10bdae7", 00:11:26.127 "assigned_rate_limits": { 00:11:26.127 "rw_ios_per_sec": 0, 00:11:26.127 "rw_mbytes_per_sec": 0, 00:11:26.127 "r_mbytes_per_sec": 0, 00:11:26.127 "w_mbytes_per_sec": 0 00:11:26.127 }, 00:11:26.127 "claimed": true, 00:11:26.127 "claim_type": "exclusive_write", 00:11:26.127 "zoned": false, 00:11:26.127 "supported_io_types": { 00:11:26.127 "read": true, 00:11:26.127 "write": true, 00:11:26.127 "unmap": true, 00:11:26.127 "flush": true, 00:11:26.127 "reset": true, 00:11:26.127 "nvme_admin": false, 00:11:26.127 "nvme_io": false, 00:11:26.127 "nvme_io_md": false, 00:11:26.127 "write_zeroes": true, 00:11:26.127 "zcopy": true, 00:11:26.127 "get_zone_info": false, 00:11:26.127 "zone_management": false, 00:11:26.127 "zone_append": false, 00:11:26.127 "compare": false, 
00:11:26.127 "compare_and_write": false, 00:11:26.127 "abort": true, 00:11:26.127 "seek_hole": false, 00:11:26.127 "seek_data": false, 00:11:26.127 "copy": true, 00:11:26.127 "nvme_iov_md": false 00:11:26.127 }, 00:11:26.127 "memory_domains": [ 00:11:26.127 { 00:11:26.127 "dma_device_id": "system", 00:11:26.127 "dma_device_type": 1 00:11:26.127 }, 00:11:26.127 { 00:11:26.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.127 "dma_device_type": 2 00:11:26.127 } 00:11:26.127 ], 00:11:26.127 "driver_specific": {} 00:11:26.127 } 00:11:26.127 ] 00:11:26.127 14:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.127 14:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:26.127 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:26.127 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:26.127 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:26.127 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.127 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.127 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.127 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.127 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:26.127 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.127 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.127 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:26.127 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.127 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.127 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.127 14:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.128 14:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.128 14:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.128 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.128 "name": "Existed_Raid", 00:11:26.128 "uuid": "6179b534-a488-42b1-a465-ab3467139d8d", 00:11:26.128 "strip_size_kb": 64, 00:11:26.128 "state": "online", 00:11:26.128 "raid_level": "concat", 00:11:26.128 "superblock": false, 00:11:26.128 "num_base_bdevs": 3, 00:11:26.128 "num_base_bdevs_discovered": 3, 00:11:26.128 "num_base_bdevs_operational": 3, 00:11:26.128 "base_bdevs_list": [ 00:11:26.128 { 00:11:26.128 "name": "BaseBdev1", 00:11:26.128 "uuid": "92702ad1-1782-485e-9d0a-2806772019fb", 00:11:26.128 "is_configured": true, 00:11:26.128 "data_offset": 0, 00:11:26.128 "data_size": 65536 00:11:26.128 }, 00:11:26.128 { 00:11:26.128 "name": "BaseBdev2", 00:11:26.128 "uuid": "280b13a5-eb06-4308-85bd-6bab4da915dd", 00:11:26.128 "is_configured": true, 00:11:26.128 "data_offset": 0, 00:11:26.128 "data_size": 65536 00:11:26.128 }, 00:11:26.128 { 00:11:26.128 "name": "BaseBdev3", 00:11:26.128 "uuid": "c017d7cc-0925-43fd-969e-b773a10bdae7", 00:11:26.128 "is_configured": true, 00:11:26.128 "data_offset": 0, 00:11:26.128 "data_size": 65536 00:11:26.128 } 00:11:26.128 ] 00:11:26.128 }' 00:11:26.128 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:11:26.128 14:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.736 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:26.736 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:26.736 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:26.736 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:26.736 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:26.736 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:26.736 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:26.736 14:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.736 14:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.736 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:26.736 [2024-11-04 14:37:25.636891] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:26.736 14:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.736 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:26.736 "name": "Existed_Raid", 00:11:26.736 "aliases": [ 00:11:26.736 "6179b534-a488-42b1-a465-ab3467139d8d" 00:11:26.736 ], 00:11:26.736 "product_name": "Raid Volume", 00:11:26.736 "block_size": 512, 00:11:26.736 "num_blocks": 196608, 00:11:26.736 "uuid": "6179b534-a488-42b1-a465-ab3467139d8d", 00:11:26.736 "assigned_rate_limits": { 00:11:26.736 "rw_ios_per_sec": 0, 00:11:26.736 "rw_mbytes_per_sec": 0, 00:11:26.736 "r_mbytes_per_sec": 
0, 00:11:26.736 "w_mbytes_per_sec": 0 00:11:26.736 }, 00:11:26.736 "claimed": false, 00:11:26.736 "zoned": false, 00:11:26.736 "supported_io_types": { 00:11:26.736 "read": true, 00:11:26.736 "write": true, 00:11:26.736 "unmap": true, 00:11:26.736 "flush": true, 00:11:26.736 "reset": true, 00:11:26.736 "nvme_admin": false, 00:11:26.736 "nvme_io": false, 00:11:26.736 "nvme_io_md": false, 00:11:26.736 "write_zeroes": true, 00:11:26.736 "zcopy": false, 00:11:26.736 "get_zone_info": false, 00:11:26.736 "zone_management": false, 00:11:26.736 "zone_append": false, 00:11:26.736 "compare": false, 00:11:26.736 "compare_and_write": false, 00:11:26.736 "abort": false, 00:11:26.736 "seek_hole": false, 00:11:26.736 "seek_data": false, 00:11:26.736 "copy": false, 00:11:26.736 "nvme_iov_md": false 00:11:26.736 }, 00:11:26.736 "memory_domains": [ 00:11:26.736 { 00:11:26.736 "dma_device_id": "system", 00:11:26.736 "dma_device_type": 1 00:11:26.736 }, 00:11:26.736 { 00:11:26.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.736 "dma_device_type": 2 00:11:26.736 }, 00:11:26.736 { 00:11:26.736 "dma_device_id": "system", 00:11:26.736 "dma_device_type": 1 00:11:26.736 }, 00:11:26.736 { 00:11:26.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.736 "dma_device_type": 2 00:11:26.736 }, 00:11:26.736 { 00:11:26.736 "dma_device_id": "system", 00:11:26.736 "dma_device_type": 1 00:11:26.736 }, 00:11:26.736 { 00:11:26.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.736 "dma_device_type": 2 00:11:26.736 } 00:11:26.736 ], 00:11:26.736 "driver_specific": { 00:11:26.736 "raid": { 00:11:26.736 "uuid": "6179b534-a488-42b1-a465-ab3467139d8d", 00:11:26.736 "strip_size_kb": 64, 00:11:26.736 "state": "online", 00:11:26.736 "raid_level": "concat", 00:11:26.736 "superblock": false, 00:11:26.736 "num_base_bdevs": 3, 00:11:26.736 "num_base_bdevs_discovered": 3, 00:11:26.736 "num_base_bdevs_operational": 3, 00:11:26.736 "base_bdevs_list": [ 00:11:26.736 { 00:11:26.736 "name": "BaseBdev1", 
00:11:26.736 "uuid": "92702ad1-1782-485e-9d0a-2806772019fb", 00:11:26.736 "is_configured": true, 00:11:26.736 "data_offset": 0, 00:11:26.736 "data_size": 65536 00:11:26.736 }, 00:11:26.736 { 00:11:26.736 "name": "BaseBdev2", 00:11:26.736 "uuid": "280b13a5-eb06-4308-85bd-6bab4da915dd", 00:11:26.736 "is_configured": true, 00:11:26.736 "data_offset": 0, 00:11:26.736 "data_size": 65536 00:11:26.736 }, 00:11:26.736 { 00:11:26.736 "name": "BaseBdev3", 00:11:26.736 "uuid": "c017d7cc-0925-43fd-969e-b773a10bdae7", 00:11:26.736 "is_configured": true, 00:11:26.736 "data_offset": 0, 00:11:26.736 "data_size": 65536 00:11:26.736 } 00:11:26.736 ] 00:11:26.736 } 00:11:26.736 } 00:11:26.736 }' 00:11:26.736 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:26.736 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:26.736 BaseBdev2 00:11:26.736 BaseBdev3' 00:11:26.736 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.736 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:26.736 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.737 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:26.737 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.737 14:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.737 14:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.737 14:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:26.737 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.737 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.737 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.737 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:26.737 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.737 14:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.737 14:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.996 14:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.996 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.996 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.996 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.996 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:26.996 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.996 14:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.996 14:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.996 14:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.996 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:11:26.996 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.996 14:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:26.996 14:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.996 14:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.996 [2024-11-04 14:37:25.952554] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:26.996 [2024-11-04 14:37:25.952618] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:26.996 [2024-11-04 14:37:25.952737] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:26.996 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.996 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:26.996 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:26.996 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:26.996 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:26.996 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:26.996 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:11:26.996 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.996 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:26.996 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.996 14:37:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.996 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:26.998 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.998 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.998 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.998 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.998 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.998 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.998 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.998 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.998 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.998 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.998 "name": "Existed_Raid", 00:11:26.998 "uuid": "6179b534-a488-42b1-a465-ab3467139d8d", 00:11:26.998 "strip_size_kb": 64, 00:11:26.998 "state": "offline", 00:11:26.998 "raid_level": "concat", 00:11:26.998 "superblock": false, 00:11:26.998 "num_base_bdevs": 3, 00:11:26.998 "num_base_bdevs_discovered": 2, 00:11:26.998 "num_base_bdevs_operational": 2, 00:11:26.998 "base_bdevs_list": [ 00:11:26.998 { 00:11:26.998 "name": null, 00:11:26.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.998 "is_configured": false, 00:11:26.998 "data_offset": 0, 00:11:26.998 "data_size": 65536 00:11:26.998 }, 00:11:26.998 { 00:11:26.998 "name": "BaseBdev2", 00:11:26.998 "uuid": 
"280b13a5-eb06-4308-85bd-6bab4da915dd", 00:11:26.998 "is_configured": true, 00:11:26.998 "data_offset": 0, 00:11:26.998 "data_size": 65536 00:11:26.998 }, 00:11:26.998 { 00:11:26.998 "name": "BaseBdev3", 00:11:26.998 "uuid": "c017d7cc-0925-43fd-969e-b773a10bdae7", 00:11:26.998 "is_configured": true, 00:11:26.998 "data_offset": 0, 00:11:26.998 "data_size": 65536 00:11:26.998 } 00:11:26.998 ] 00:11:26.998 }' 00:11:26.998 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.998 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.567 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:27.567 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:27.567 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:27.567 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.567 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.567 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.567 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.567 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:27.567 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:27.567 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:27.567 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.567 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.567 [2024-11-04 14:37:26.601749] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:27.567 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.567 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:27.567 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:27.826 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.827 [2024-11-04 14:37:26.742800] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:27.827 [2024-11-04 14:37:26.742867] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:27.827 14:37:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.827 BaseBdev2 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:27.827 
14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.827 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.827 [ 00:11:27.827 { 00:11:27.827 "name": "BaseBdev2", 00:11:27.827 "aliases": [ 00:11:27.827 "451fbef9-dc83-4389-b2e9-acb82040b104" 00:11:27.827 ], 00:11:27.827 "product_name": "Malloc disk", 00:11:27.827 "block_size": 512, 00:11:27.827 "num_blocks": 65536, 00:11:27.827 "uuid": "451fbef9-dc83-4389-b2e9-acb82040b104", 00:11:27.827 "assigned_rate_limits": { 00:11:27.827 "rw_ios_per_sec": 0, 00:11:27.827 "rw_mbytes_per_sec": 0, 00:11:27.827 "r_mbytes_per_sec": 0, 00:11:27.827 "w_mbytes_per_sec": 0 00:11:28.086 }, 00:11:28.086 "claimed": false, 00:11:28.086 "zoned": false, 00:11:28.086 "supported_io_types": { 00:11:28.086 "read": true, 00:11:28.086 "write": true, 00:11:28.086 "unmap": true, 00:11:28.086 "flush": true, 00:11:28.086 "reset": true, 00:11:28.086 "nvme_admin": false, 00:11:28.086 "nvme_io": false, 00:11:28.087 "nvme_io_md": false, 00:11:28.087 "write_zeroes": true, 
00:11:28.087 "zcopy": true, 00:11:28.087 "get_zone_info": false, 00:11:28.087 "zone_management": false, 00:11:28.087 "zone_append": false, 00:11:28.087 "compare": false, 00:11:28.087 "compare_and_write": false, 00:11:28.087 "abort": true, 00:11:28.087 "seek_hole": false, 00:11:28.087 "seek_data": false, 00:11:28.087 "copy": true, 00:11:28.087 "nvme_iov_md": false 00:11:28.087 }, 00:11:28.087 "memory_domains": [ 00:11:28.087 { 00:11:28.087 "dma_device_id": "system", 00:11:28.087 "dma_device_type": 1 00:11:28.087 }, 00:11:28.087 { 00:11:28.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.087 "dma_device_type": 2 00:11:28.087 } 00:11:28.087 ], 00:11:28.087 "driver_specific": {} 00:11:28.087 } 00:11:28.087 ] 00:11:28.087 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.087 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:28.087 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:28.087 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:28.087 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:28.087 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.087 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.087 BaseBdev3 00:11:28.087 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.087 14:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:28.087 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:28.087 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:28.087 14:37:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:28.087 14:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:28.087 14:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:28.087 14:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:28.087 14:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.087 14:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.087 14:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.087 14:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:28.087 14:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.087 14:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.087 [ 00:11:28.087 { 00:11:28.087 "name": "BaseBdev3", 00:11:28.087 "aliases": [ 00:11:28.087 "05263154-e3c5-457d-93d2-0d634910b40b" 00:11:28.087 ], 00:11:28.087 "product_name": "Malloc disk", 00:11:28.087 "block_size": 512, 00:11:28.087 "num_blocks": 65536, 00:11:28.087 "uuid": "05263154-e3c5-457d-93d2-0d634910b40b", 00:11:28.087 "assigned_rate_limits": { 00:11:28.087 "rw_ios_per_sec": 0, 00:11:28.087 "rw_mbytes_per_sec": 0, 00:11:28.087 "r_mbytes_per_sec": 0, 00:11:28.087 "w_mbytes_per_sec": 0 00:11:28.087 }, 00:11:28.087 "claimed": false, 00:11:28.087 "zoned": false, 00:11:28.087 "supported_io_types": { 00:11:28.087 "read": true, 00:11:28.087 "write": true, 00:11:28.087 "unmap": true, 00:11:28.087 "flush": true, 00:11:28.087 "reset": true, 00:11:28.087 "nvme_admin": false, 00:11:28.087 "nvme_io": false, 00:11:28.087 "nvme_io_md": false, 00:11:28.087 "write_zeroes": true, 
00:11:28.087 "zcopy": true, 00:11:28.087 "get_zone_info": false, 00:11:28.087 "zone_management": false, 00:11:28.087 "zone_append": false, 00:11:28.087 "compare": false, 00:11:28.087 "compare_and_write": false, 00:11:28.087 "abort": true, 00:11:28.087 "seek_hole": false, 00:11:28.087 "seek_data": false, 00:11:28.087 "copy": true, 00:11:28.087 "nvme_iov_md": false 00:11:28.087 }, 00:11:28.087 "memory_domains": [ 00:11:28.087 { 00:11:28.087 "dma_device_id": "system", 00:11:28.087 "dma_device_type": 1 00:11:28.087 }, 00:11:28.087 { 00:11:28.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.087 "dma_device_type": 2 00:11:28.087 } 00:11:28.087 ], 00:11:28.087 "driver_specific": {} 00:11:28.087 } 00:11:28.087 ] 00:11:28.087 14:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.087 14:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:28.087 14:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:28.087 14:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:28.087 14:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:28.087 14:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.087 14:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.087 [2024-11-04 14:37:27.036519] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:28.087 [2024-11-04 14:37:27.036589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:28.087 [2024-11-04 14:37:27.036649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:28.087 [2024-11-04 14:37:27.039082] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:28.087 14:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.087 14:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:28.087 14:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.087 14:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.087 14:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.087 14:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.087 14:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.087 14:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.087 14:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.087 14:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.087 14:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.087 14:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.087 14:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.087 14:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.087 14:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.087 14:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.087 14:37:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.087 "name": "Existed_Raid", 00:11:28.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.087 "strip_size_kb": 64, 00:11:28.087 "state": "configuring", 00:11:28.087 "raid_level": "concat", 00:11:28.087 "superblock": false, 00:11:28.087 "num_base_bdevs": 3, 00:11:28.087 "num_base_bdevs_discovered": 2, 00:11:28.087 "num_base_bdevs_operational": 3, 00:11:28.087 "base_bdevs_list": [ 00:11:28.087 { 00:11:28.087 "name": "BaseBdev1", 00:11:28.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.087 "is_configured": false, 00:11:28.087 "data_offset": 0, 00:11:28.087 "data_size": 0 00:11:28.087 }, 00:11:28.087 { 00:11:28.087 "name": "BaseBdev2", 00:11:28.087 "uuid": "451fbef9-dc83-4389-b2e9-acb82040b104", 00:11:28.087 "is_configured": true, 00:11:28.088 "data_offset": 0, 00:11:28.088 "data_size": 65536 00:11:28.088 }, 00:11:28.088 { 00:11:28.088 "name": "BaseBdev3", 00:11:28.088 "uuid": "05263154-e3c5-457d-93d2-0d634910b40b", 00:11:28.088 "is_configured": true, 00:11:28.088 "data_offset": 0, 00:11:28.088 "data_size": 65536 00:11:28.088 } 00:11:28.088 ] 00:11:28.088 }' 00:11:28.088 14:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.088 14:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.705 14:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:28.705 14:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.705 14:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.705 [2024-11-04 14:37:27.548744] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:28.705 14:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.705 14:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:28.705 14:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.705 14:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.705 14:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.705 14:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.705 14:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.705 14:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.705 14:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.705 14:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.705 14:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.705 14:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.705 14:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.705 14:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.705 14:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.705 14:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.705 14:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.705 "name": "Existed_Raid", 00:11:28.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.705 "strip_size_kb": 64, 00:11:28.705 "state": "configuring", 00:11:28.705 "raid_level": "concat", 00:11:28.705 "superblock": false, 
00:11:28.705 "num_base_bdevs": 3, 00:11:28.705 "num_base_bdevs_discovered": 1, 00:11:28.705 "num_base_bdevs_operational": 3, 00:11:28.705 "base_bdevs_list": [ 00:11:28.705 { 00:11:28.705 "name": "BaseBdev1", 00:11:28.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.705 "is_configured": false, 00:11:28.705 "data_offset": 0, 00:11:28.705 "data_size": 0 00:11:28.705 }, 00:11:28.705 { 00:11:28.705 "name": null, 00:11:28.705 "uuid": "451fbef9-dc83-4389-b2e9-acb82040b104", 00:11:28.705 "is_configured": false, 00:11:28.705 "data_offset": 0, 00:11:28.705 "data_size": 65536 00:11:28.705 }, 00:11:28.705 { 00:11:28.705 "name": "BaseBdev3", 00:11:28.705 "uuid": "05263154-e3c5-457d-93d2-0d634910b40b", 00:11:28.705 "is_configured": true, 00:11:28.705 "data_offset": 0, 00:11:28.705 "data_size": 65536 00:11:28.705 } 00:11:28.705 ] 00:11:28.705 }' 00:11:28.705 14:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.705 14:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.273 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.273 14:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.273 14:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.273 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:29.273 14:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.273 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:29.273 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:29.273 14:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.273 
14:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.273 [2024-11-04 14:37:28.171174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:29.273 BaseBdev1 00:11:29.273 14:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.273 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:29.273 14:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:29.273 14:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:29.273 14:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:29.273 14:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:29.273 14:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:29.273 14:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:29.273 14:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.273 14:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.273 14:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.273 14:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:29.273 14:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.273 14:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.273 [ 00:11:29.273 { 00:11:29.273 "name": "BaseBdev1", 00:11:29.273 "aliases": [ 00:11:29.273 "76ccd3c2-3d72-4ca4-9df0-bcdf2318fa20" 00:11:29.273 ], 00:11:29.273 "product_name": 
"Malloc disk", 00:11:29.273 "block_size": 512, 00:11:29.273 "num_blocks": 65536, 00:11:29.273 "uuid": "76ccd3c2-3d72-4ca4-9df0-bcdf2318fa20", 00:11:29.273 "assigned_rate_limits": { 00:11:29.273 "rw_ios_per_sec": 0, 00:11:29.273 "rw_mbytes_per_sec": 0, 00:11:29.273 "r_mbytes_per_sec": 0, 00:11:29.273 "w_mbytes_per_sec": 0 00:11:29.273 }, 00:11:29.273 "claimed": true, 00:11:29.273 "claim_type": "exclusive_write", 00:11:29.273 "zoned": false, 00:11:29.273 "supported_io_types": { 00:11:29.273 "read": true, 00:11:29.273 "write": true, 00:11:29.273 "unmap": true, 00:11:29.273 "flush": true, 00:11:29.273 "reset": true, 00:11:29.273 "nvme_admin": false, 00:11:29.273 "nvme_io": false, 00:11:29.273 "nvme_io_md": false, 00:11:29.273 "write_zeroes": true, 00:11:29.273 "zcopy": true, 00:11:29.273 "get_zone_info": false, 00:11:29.273 "zone_management": false, 00:11:29.273 "zone_append": false, 00:11:29.273 "compare": false, 00:11:29.273 "compare_and_write": false, 00:11:29.273 "abort": true, 00:11:29.273 "seek_hole": false, 00:11:29.273 "seek_data": false, 00:11:29.273 "copy": true, 00:11:29.273 "nvme_iov_md": false 00:11:29.273 }, 00:11:29.273 "memory_domains": [ 00:11:29.273 { 00:11:29.273 "dma_device_id": "system", 00:11:29.273 "dma_device_type": 1 00:11:29.273 }, 00:11:29.273 { 00:11:29.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.273 "dma_device_type": 2 00:11:29.273 } 00:11:29.273 ], 00:11:29.273 "driver_specific": {} 00:11:29.273 } 00:11:29.273 ] 00:11:29.274 14:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.274 14:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:29.274 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:29.274 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.274 14:37:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.274 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:29.274 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.274 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:29.274 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.274 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.274 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.274 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.274 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.274 14:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.274 14:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.274 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.274 14:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.274 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.274 "name": "Existed_Raid", 00:11:29.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.274 "strip_size_kb": 64, 00:11:29.274 "state": "configuring", 00:11:29.274 "raid_level": "concat", 00:11:29.274 "superblock": false, 00:11:29.274 "num_base_bdevs": 3, 00:11:29.274 "num_base_bdevs_discovered": 2, 00:11:29.274 "num_base_bdevs_operational": 3, 00:11:29.274 "base_bdevs_list": [ 00:11:29.274 { 00:11:29.274 "name": "BaseBdev1", 
00:11:29.274 "uuid": "76ccd3c2-3d72-4ca4-9df0-bcdf2318fa20", 00:11:29.274 "is_configured": true, 00:11:29.274 "data_offset": 0, 00:11:29.274 "data_size": 65536 00:11:29.274 }, 00:11:29.274 { 00:11:29.274 "name": null, 00:11:29.274 "uuid": "451fbef9-dc83-4389-b2e9-acb82040b104", 00:11:29.274 "is_configured": false, 00:11:29.274 "data_offset": 0, 00:11:29.274 "data_size": 65536 00:11:29.274 }, 00:11:29.274 { 00:11:29.274 "name": "BaseBdev3", 00:11:29.274 "uuid": "05263154-e3c5-457d-93d2-0d634910b40b", 00:11:29.274 "is_configured": true, 00:11:29.274 "data_offset": 0, 00:11:29.274 "data_size": 65536 00:11:29.274 } 00:11:29.274 ] 00:11:29.274 }' 00:11:29.274 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.274 14:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.841 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.841 14:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.841 14:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.841 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:29.841 14:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.841 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:29.841 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:29.841 14:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.841 14:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.841 [2024-11-04 14:37:28.759420] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:29.841 
14:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.841 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:29.841 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.841 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.841 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:29.841 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.841 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:29.841 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.841 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.841 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.841 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.841 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.841 14:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.841 14:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.841 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.841 14:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.841 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.841 "name": "Existed_Raid", 00:11:29.841 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:29.841 "strip_size_kb": 64, 00:11:29.841 "state": "configuring", 00:11:29.841 "raid_level": "concat", 00:11:29.841 "superblock": false, 00:11:29.841 "num_base_bdevs": 3, 00:11:29.841 "num_base_bdevs_discovered": 1, 00:11:29.841 "num_base_bdevs_operational": 3, 00:11:29.841 "base_bdevs_list": [ 00:11:29.841 { 00:11:29.841 "name": "BaseBdev1", 00:11:29.841 "uuid": "76ccd3c2-3d72-4ca4-9df0-bcdf2318fa20", 00:11:29.841 "is_configured": true, 00:11:29.841 "data_offset": 0, 00:11:29.842 "data_size": 65536 00:11:29.842 }, 00:11:29.842 { 00:11:29.842 "name": null, 00:11:29.842 "uuid": "451fbef9-dc83-4389-b2e9-acb82040b104", 00:11:29.842 "is_configured": false, 00:11:29.842 "data_offset": 0, 00:11:29.842 "data_size": 65536 00:11:29.842 }, 00:11:29.842 { 00:11:29.842 "name": null, 00:11:29.842 "uuid": "05263154-e3c5-457d-93d2-0d634910b40b", 00:11:29.842 "is_configured": false, 00:11:29.842 "data_offset": 0, 00:11:29.842 "data_size": 65536 00:11:29.842 } 00:11:29.842 ] 00:11:29.842 }' 00:11:29.842 14:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.842 14:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.409 14:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:30.409 14:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.409 14:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.409 14:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.409 14:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.409 14:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:30.409 14:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:30.409 14:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.409 14:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.409 [2024-11-04 14:37:29.339584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:30.409 14:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.409 14:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:30.409 14:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.409 14:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.410 14:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:30.410 14:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.410 14:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:30.410 14:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.410 14:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.410 14:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.410 14:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.410 14:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.410 14:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.410 14:37:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.410 14:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.410 14:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.410 14:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.410 "name": "Existed_Raid", 00:11:30.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.410 "strip_size_kb": 64, 00:11:30.410 "state": "configuring", 00:11:30.410 "raid_level": "concat", 00:11:30.410 "superblock": false, 00:11:30.410 "num_base_bdevs": 3, 00:11:30.410 "num_base_bdevs_discovered": 2, 00:11:30.410 "num_base_bdevs_operational": 3, 00:11:30.410 "base_bdevs_list": [ 00:11:30.410 { 00:11:30.410 "name": "BaseBdev1", 00:11:30.410 "uuid": "76ccd3c2-3d72-4ca4-9df0-bcdf2318fa20", 00:11:30.410 "is_configured": true, 00:11:30.410 "data_offset": 0, 00:11:30.410 "data_size": 65536 00:11:30.410 }, 00:11:30.410 { 00:11:30.410 "name": null, 00:11:30.410 "uuid": "451fbef9-dc83-4389-b2e9-acb82040b104", 00:11:30.410 "is_configured": false, 00:11:30.410 "data_offset": 0, 00:11:30.410 "data_size": 65536 00:11:30.410 }, 00:11:30.410 { 00:11:30.410 "name": "BaseBdev3", 00:11:30.410 "uuid": "05263154-e3c5-457d-93d2-0d634910b40b", 00:11:30.410 "is_configured": true, 00:11:30.410 "data_offset": 0, 00:11:30.410 "data_size": 65536 00:11:30.410 } 00:11:30.410 ] 00:11:30.410 }' 00:11:30.410 14:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.410 14:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.977 14:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.977 14:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.977 14:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:30.977 14:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:30.977 14:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.977 14:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:30.977 14:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:30.977 14:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.977 14:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.977 [2024-11-04 14:37:29.911789] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:30.977 14:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.977 14:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:30.977 14:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.977 14:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.977 14:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:30.977 14:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.977 14:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:30.977 14:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.977 14:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.977 14:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.977 14:37:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.977 14:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.977 14:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.977 14:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.977 14:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.977 14:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.977 14:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.977 "name": "Existed_Raid", 00:11:30.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.977 "strip_size_kb": 64, 00:11:30.977 "state": "configuring", 00:11:30.977 "raid_level": "concat", 00:11:30.977 "superblock": false, 00:11:30.977 "num_base_bdevs": 3, 00:11:30.977 "num_base_bdevs_discovered": 1, 00:11:30.977 "num_base_bdevs_operational": 3, 00:11:30.977 "base_bdevs_list": [ 00:11:30.977 { 00:11:30.977 "name": null, 00:11:30.977 "uuid": "76ccd3c2-3d72-4ca4-9df0-bcdf2318fa20", 00:11:30.977 "is_configured": false, 00:11:30.977 "data_offset": 0, 00:11:30.977 "data_size": 65536 00:11:30.977 }, 00:11:30.977 { 00:11:30.977 "name": null, 00:11:30.977 "uuid": "451fbef9-dc83-4389-b2e9-acb82040b104", 00:11:30.977 "is_configured": false, 00:11:30.977 "data_offset": 0, 00:11:30.977 "data_size": 65536 00:11:30.977 }, 00:11:30.977 { 00:11:30.977 "name": "BaseBdev3", 00:11:30.977 "uuid": "05263154-e3c5-457d-93d2-0d634910b40b", 00:11:30.977 "is_configured": true, 00:11:30.977 "data_offset": 0, 00:11:30.977 "data_size": 65536 00:11:30.977 } 00:11:30.977 ] 00:11:30.977 }' 00:11:30.977 14:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.977 14:37:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.544 14:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.544 14:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.544 14:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.544 14:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:31.544 14:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.544 14:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:31.544 14:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:31.544 14:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.544 14:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.544 [2024-11-04 14:37:30.576074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:31.544 14:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.544 14:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:31.544 14:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.544 14:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.544 14:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:31.544 14:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.544 14:37:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:31.544 14:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.544 14:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.544 14:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.544 14:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.544 14:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.544 14:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.544 14:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.544 14:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.544 14:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.544 14:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.544 "name": "Existed_Raid", 00:11:31.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.544 "strip_size_kb": 64, 00:11:31.544 "state": "configuring", 00:11:31.544 "raid_level": "concat", 00:11:31.544 "superblock": false, 00:11:31.544 "num_base_bdevs": 3, 00:11:31.544 "num_base_bdevs_discovered": 2, 00:11:31.544 "num_base_bdevs_operational": 3, 00:11:31.544 "base_bdevs_list": [ 00:11:31.544 { 00:11:31.544 "name": null, 00:11:31.544 "uuid": "76ccd3c2-3d72-4ca4-9df0-bcdf2318fa20", 00:11:31.544 "is_configured": false, 00:11:31.544 "data_offset": 0, 00:11:31.544 "data_size": 65536 00:11:31.544 }, 00:11:31.544 { 00:11:31.544 "name": "BaseBdev2", 00:11:31.544 "uuid": "451fbef9-dc83-4389-b2e9-acb82040b104", 00:11:31.544 "is_configured": true, 00:11:31.544 "data_offset": 
0, 00:11:31.544 "data_size": 65536 00:11:31.544 }, 00:11:31.544 { 00:11:31.544 "name": "BaseBdev3", 00:11:31.544 "uuid": "05263154-e3c5-457d-93d2-0d634910b40b", 00:11:31.544 "is_configured": true, 00:11:31.544 "data_offset": 0, 00:11:31.544 "data_size": 65536 00:11:31.544 } 00:11:31.544 ] 00:11:31.544 }' 00:11:31.544 14:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.544 14:37:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.111 14:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.111 14:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.111 14:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:32.111 14:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.111 14:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.111 14:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:32.111 14:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:32.111 14:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.111 14:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.112 14:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.112 14:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.112 14:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 76ccd3c2-3d72-4ca4-9df0-bcdf2318fa20 00:11:32.112 14:37:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.112 14:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.371 [2024-11-04 14:37:31.255155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:32.371 [2024-11-04 14:37:31.255198] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:32.371 [2024-11-04 14:37:31.255213] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:32.371 [2024-11-04 14:37:31.255578] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:32.371 [2024-11-04 14:37:31.255761] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:32.371 [2024-11-04 14:37:31.255793] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:32.371 [2024-11-04 14:37:31.256093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.371 NewBaseBdev 00:11:32.371 14:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.371 14:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:32.371 14:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:11:32.371 14:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:32.371 14:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:32.371 14:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:32.371 14:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:32.371 14:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:32.371 
14:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.371 14:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.371 14:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.371 14:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:32.371 14:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.371 14:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.371 [ 00:11:32.371 { 00:11:32.371 "name": "NewBaseBdev", 00:11:32.371 "aliases": [ 00:11:32.371 "76ccd3c2-3d72-4ca4-9df0-bcdf2318fa20" 00:11:32.371 ], 00:11:32.371 "product_name": "Malloc disk", 00:11:32.371 "block_size": 512, 00:11:32.371 "num_blocks": 65536, 00:11:32.371 "uuid": "76ccd3c2-3d72-4ca4-9df0-bcdf2318fa20", 00:11:32.371 "assigned_rate_limits": { 00:11:32.371 "rw_ios_per_sec": 0, 00:11:32.371 "rw_mbytes_per_sec": 0, 00:11:32.371 "r_mbytes_per_sec": 0, 00:11:32.371 "w_mbytes_per_sec": 0 00:11:32.371 }, 00:11:32.371 "claimed": true, 00:11:32.371 "claim_type": "exclusive_write", 00:11:32.371 "zoned": false, 00:11:32.371 "supported_io_types": { 00:11:32.371 "read": true, 00:11:32.371 "write": true, 00:11:32.371 "unmap": true, 00:11:32.371 "flush": true, 00:11:32.371 "reset": true, 00:11:32.371 "nvme_admin": false, 00:11:32.371 "nvme_io": false, 00:11:32.371 "nvme_io_md": false, 00:11:32.371 "write_zeroes": true, 00:11:32.371 "zcopy": true, 00:11:32.371 "get_zone_info": false, 00:11:32.371 "zone_management": false, 00:11:32.371 "zone_append": false, 00:11:32.371 "compare": false, 00:11:32.371 "compare_and_write": false, 00:11:32.371 "abort": true, 00:11:32.371 "seek_hole": false, 00:11:32.371 "seek_data": false, 00:11:32.371 "copy": true, 00:11:32.371 "nvme_iov_md": false 00:11:32.371 }, 00:11:32.371 
"memory_domains": [ 00:11:32.371 { 00:11:32.371 "dma_device_id": "system", 00:11:32.371 "dma_device_type": 1 00:11:32.371 }, 00:11:32.371 { 00:11:32.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.371 "dma_device_type": 2 00:11:32.371 } 00:11:32.371 ], 00:11:32.371 "driver_specific": {} 00:11:32.371 } 00:11:32.371 ] 00:11:32.371 14:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.371 14:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:32.371 14:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:32.371 14:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.371 14:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:32.371 14:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:32.371 14:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.371 14:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:32.371 14:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.371 14:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.371 14:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.371 14:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.371 14:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.371 14:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.371 14:37:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.371 14:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.371 14:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.371 14:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.371 "name": "Existed_Raid", 00:11:32.371 "uuid": "e3bfde6b-cec3-4ee5-ab81-f1eb46da58dd", 00:11:32.371 "strip_size_kb": 64, 00:11:32.371 "state": "online", 00:11:32.371 "raid_level": "concat", 00:11:32.371 "superblock": false, 00:11:32.371 "num_base_bdevs": 3, 00:11:32.371 "num_base_bdevs_discovered": 3, 00:11:32.371 "num_base_bdevs_operational": 3, 00:11:32.371 "base_bdevs_list": [ 00:11:32.371 { 00:11:32.371 "name": "NewBaseBdev", 00:11:32.371 "uuid": "76ccd3c2-3d72-4ca4-9df0-bcdf2318fa20", 00:11:32.371 "is_configured": true, 00:11:32.371 "data_offset": 0, 00:11:32.371 "data_size": 65536 00:11:32.371 }, 00:11:32.371 { 00:11:32.371 "name": "BaseBdev2", 00:11:32.371 "uuid": "451fbef9-dc83-4389-b2e9-acb82040b104", 00:11:32.371 "is_configured": true, 00:11:32.371 "data_offset": 0, 00:11:32.371 "data_size": 65536 00:11:32.371 }, 00:11:32.371 { 00:11:32.371 "name": "BaseBdev3", 00:11:32.371 "uuid": "05263154-e3c5-457d-93d2-0d634910b40b", 00:11:32.371 "is_configured": true, 00:11:32.371 "data_offset": 0, 00:11:32.371 "data_size": 65536 00:11:32.371 } 00:11:32.371 ] 00:11:32.371 }' 00:11:32.371 14:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.371 14:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.939 14:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:32.939 14:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:32.939 14:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:11:32.939 14:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:32.939 14:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:32.939 14:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:32.939 14:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:32.939 14:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:32.939 14:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.939 14:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.939 [2024-11-04 14:37:31.847737] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:32.939 14:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.939 14:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:32.939 "name": "Existed_Raid", 00:11:32.939 "aliases": [ 00:11:32.939 "e3bfde6b-cec3-4ee5-ab81-f1eb46da58dd" 00:11:32.939 ], 00:11:32.939 "product_name": "Raid Volume", 00:11:32.939 "block_size": 512, 00:11:32.939 "num_blocks": 196608, 00:11:32.939 "uuid": "e3bfde6b-cec3-4ee5-ab81-f1eb46da58dd", 00:11:32.939 "assigned_rate_limits": { 00:11:32.939 "rw_ios_per_sec": 0, 00:11:32.939 "rw_mbytes_per_sec": 0, 00:11:32.939 "r_mbytes_per_sec": 0, 00:11:32.939 "w_mbytes_per_sec": 0 00:11:32.939 }, 00:11:32.939 "claimed": false, 00:11:32.939 "zoned": false, 00:11:32.939 "supported_io_types": { 00:11:32.939 "read": true, 00:11:32.939 "write": true, 00:11:32.939 "unmap": true, 00:11:32.939 "flush": true, 00:11:32.939 "reset": true, 00:11:32.939 "nvme_admin": false, 00:11:32.939 "nvme_io": false, 00:11:32.939 "nvme_io_md": false, 00:11:32.939 "write_zeroes": true, 
00:11:32.939 "zcopy": false, 00:11:32.939 "get_zone_info": false, 00:11:32.939 "zone_management": false, 00:11:32.939 "zone_append": false, 00:11:32.939 "compare": false, 00:11:32.939 "compare_and_write": false, 00:11:32.939 "abort": false, 00:11:32.939 "seek_hole": false, 00:11:32.939 "seek_data": false, 00:11:32.939 "copy": false, 00:11:32.939 "nvme_iov_md": false 00:11:32.939 }, 00:11:32.939 "memory_domains": [ 00:11:32.939 { 00:11:32.939 "dma_device_id": "system", 00:11:32.939 "dma_device_type": 1 00:11:32.939 }, 00:11:32.939 { 00:11:32.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.939 "dma_device_type": 2 00:11:32.939 }, 00:11:32.939 { 00:11:32.939 "dma_device_id": "system", 00:11:32.939 "dma_device_type": 1 00:11:32.939 }, 00:11:32.939 { 00:11:32.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.939 "dma_device_type": 2 00:11:32.939 }, 00:11:32.939 { 00:11:32.939 "dma_device_id": "system", 00:11:32.939 "dma_device_type": 1 00:11:32.939 }, 00:11:32.939 { 00:11:32.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.939 "dma_device_type": 2 00:11:32.939 } 00:11:32.939 ], 00:11:32.939 "driver_specific": { 00:11:32.939 "raid": { 00:11:32.939 "uuid": "e3bfde6b-cec3-4ee5-ab81-f1eb46da58dd", 00:11:32.939 "strip_size_kb": 64, 00:11:32.939 "state": "online", 00:11:32.939 "raid_level": "concat", 00:11:32.939 "superblock": false, 00:11:32.939 "num_base_bdevs": 3, 00:11:32.939 "num_base_bdevs_discovered": 3, 00:11:32.939 "num_base_bdevs_operational": 3, 00:11:32.939 "base_bdevs_list": [ 00:11:32.939 { 00:11:32.939 "name": "NewBaseBdev", 00:11:32.939 "uuid": "76ccd3c2-3d72-4ca4-9df0-bcdf2318fa20", 00:11:32.939 "is_configured": true, 00:11:32.939 "data_offset": 0, 00:11:32.939 "data_size": 65536 00:11:32.939 }, 00:11:32.939 { 00:11:32.939 "name": "BaseBdev2", 00:11:32.939 "uuid": "451fbef9-dc83-4389-b2e9-acb82040b104", 00:11:32.939 "is_configured": true, 00:11:32.939 "data_offset": 0, 00:11:32.939 "data_size": 65536 00:11:32.939 }, 00:11:32.939 { 
00:11:32.939 "name": "BaseBdev3", 00:11:32.939 "uuid": "05263154-e3c5-457d-93d2-0d634910b40b", 00:11:32.939 "is_configured": true, 00:11:32.939 "data_offset": 0, 00:11:32.939 "data_size": 65536 00:11:32.939 } 00:11:32.939 ] 00:11:32.939 } 00:11:32.939 } 00:11:32.939 }' 00:11:32.939 14:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:32.939 14:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:32.939 BaseBdev2 00:11:32.939 BaseBdev3' 00:11:32.939 14:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.939 14:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:32.939 14:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:32.939 14:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:32.939 14:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.939 14:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:32.939 14:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.939 14:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.197 14:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.197 14:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.197 14:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.197 14:37:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.197 14:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:33.197 14:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.197 14:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.197 14:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.197 14:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.197 14:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.197 14:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.197 14:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:33.197 14:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.197 14:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.197 14:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.197 14:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.197 14:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.197 14:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.197 14:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:33.197 14:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.197 14:37:32 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:11:33.197 [2024-11-04 14:37:32.183526] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:33.197 [2024-11-04 14:37:32.183712] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:33.197 [2024-11-04 14:37:32.183819] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:33.197 [2024-11-04 14:37:32.183891] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:33.197 [2024-11-04 14:37:32.183911] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:33.197 14:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.197 14:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65623 00:11:33.197 14:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 65623 ']' 00:11:33.197 14:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 65623 00:11:33.197 14:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:11:33.197 14:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:33.197 14:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65623 00:11:33.197 killing process with pid 65623 00:11:33.197 14:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:33.197 14:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:33.197 14:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65623' 00:11:33.197 14:37:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@971 -- # kill 65623 00:11:33.197 [2024-11-04 14:37:32.223362] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:33.197 14:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 65623 00:11:33.455 [2024-11-04 14:37:32.491059] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:34.390 14:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:34.390 00:11:34.390 real 0m11.815s 00:11:34.390 user 0m19.709s 00:11:34.390 sys 0m1.618s 00:11:34.390 14:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:34.390 ************************************ 00:11:34.390 END TEST raid_state_function_test 00:11:34.390 14:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.390 ************************************ 00:11:34.649 14:37:33 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:11:34.649 14:37:33 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:34.649 14:37:33 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:34.649 14:37:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:34.649 ************************************ 00:11:34.649 START TEST raid_state_function_test_sb 00:11:34.649 ************************************ 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 true 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:34.649 Process raid pid: 66260 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66260 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66260' 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66260 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 66260 ']' 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:34.649 14:37:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.649 [2024-11-04 14:37:33.654362] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:11:34.649 [2024-11-04 14:37:33.654578] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.907 [2024-11-04 14:37:33.838197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.907 [2024-11-04 14:37:33.964636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.165 [2024-11-04 14:37:34.166975] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.165 [2024-11-04 14:37:34.167042] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.731 14:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:35.731 14:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:11:35.731 14:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:35.731 14:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.731 14:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.731 [2024-11-04 14:37:34.658209] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:35.731 [2024-11-04 14:37:34.658559] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:35.731 [2024-11-04 
14:37:34.658588] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:35.731 [2024-11-04 14:37:34.658607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:35.731 [2024-11-04 14:37:34.658617] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:35.731 [2024-11-04 14:37:34.658631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:35.731 14:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.731 14:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:35.731 14:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.731 14:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.731 14:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.731 14:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.731 14:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:35.731 14:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.731 14:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.731 14:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.731 14:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.731 14:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.731 14:37:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.731 14:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.731 14:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.731 14:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.731 14:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.731 "name": "Existed_Raid", 00:11:35.731 "uuid": "e246e56d-e32c-4f39-97ff-ab4df57ae355", 00:11:35.731 "strip_size_kb": 64, 00:11:35.731 "state": "configuring", 00:11:35.731 "raid_level": "concat", 00:11:35.731 "superblock": true, 00:11:35.731 "num_base_bdevs": 3, 00:11:35.731 "num_base_bdevs_discovered": 0, 00:11:35.731 "num_base_bdevs_operational": 3, 00:11:35.731 "base_bdevs_list": [ 00:11:35.731 { 00:11:35.731 "name": "BaseBdev1", 00:11:35.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.731 "is_configured": false, 00:11:35.731 "data_offset": 0, 00:11:35.731 "data_size": 0 00:11:35.731 }, 00:11:35.731 { 00:11:35.731 "name": "BaseBdev2", 00:11:35.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.731 "is_configured": false, 00:11:35.731 "data_offset": 0, 00:11:35.731 "data_size": 0 00:11:35.731 }, 00:11:35.731 { 00:11:35.731 "name": "BaseBdev3", 00:11:35.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.731 "is_configured": false, 00:11:35.731 "data_offset": 0, 00:11:35.731 "data_size": 0 00:11:35.731 } 00:11:35.731 ] 00:11:35.731 }' 00:11:35.731 14:37:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.731 14:37:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.299 [2024-11-04 14:37:35.178275] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:36.299 [2024-11-04 14:37:35.178531] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.299 [2024-11-04 14:37:35.186265] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:36.299 [2024-11-04 14:37:35.186350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:36.299 [2024-11-04 14:37:35.186366] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:36.299 [2024-11-04 14:37:35.186381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:36.299 [2024-11-04 14:37:35.186390] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:36.299 [2024-11-04 14:37:35.186402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:36.299 
14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.299 [2024-11-04 14:37:35.229448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.299 BaseBdev1 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.299 [ 00:11:36.299 { 
00:11:36.299 "name": "BaseBdev1", 00:11:36.299 "aliases": [ 00:11:36.299 "0cebc8da-466b-4b4a-a454-9473bd58fee9" 00:11:36.299 ], 00:11:36.299 "product_name": "Malloc disk", 00:11:36.299 "block_size": 512, 00:11:36.299 "num_blocks": 65536, 00:11:36.299 "uuid": "0cebc8da-466b-4b4a-a454-9473bd58fee9", 00:11:36.299 "assigned_rate_limits": { 00:11:36.299 "rw_ios_per_sec": 0, 00:11:36.299 "rw_mbytes_per_sec": 0, 00:11:36.299 "r_mbytes_per_sec": 0, 00:11:36.299 "w_mbytes_per_sec": 0 00:11:36.299 }, 00:11:36.299 "claimed": true, 00:11:36.299 "claim_type": "exclusive_write", 00:11:36.299 "zoned": false, 00:11:36.299 "supported_io_types": { 00:11:36.299 "read": true, 00:11:36.299 "write": true, 00:11:36.299 "unmap": true, 00:11:36.299 "flush": true, 00:11:36.299 "reset": true, 00:11:36.299 "nvme_admin": false, 00:11:36.299 "nvme_io": false, 00:11:36.299 "nvme_io_md": false, 00:11:36.299 "write_zeroes": true, 00:11:36.299 "zcopy": true, 00:11:36.299 "get_zone_info": false, 00:11:36.299 "zone_management": false, 00:11:36.299 "zone_append": false, 00:11:36.299 "compare": false, 00:11:36.299 "compare_and_write": false, 00:11:36.299 "abort": true, 00:11:36.299 "seek_hole": false, 00:11:36.299 "seek_data": false, 00:11:36.299 "copy": true, 00:11:36.299 "nvme_iov_md": false 00:11:36.299 }, 00:11:36.299 "memory_domains": [ 00:11:36.299 { 00:11:36.299 "dma_device_id": "system", 00:11:36.299 "dma_device_type": 1 00:11:36.299 }, 00:11:36.299 { 00:11:36.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.299 "dma_device_type": 2 00:11:36.299 } 00:11:36.299 ], 00:11:36.299 "driver_specific": {} 00:11:36.299 } 00:11:36.299 ] 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.299 14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.300 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.300 14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.300 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.300 "name": "Existed_Raid", 00:11:36.300 "uuid": "a6624b87-1127-466d-b560-1778eb712328", 00:11:36.300 "strip_size_kb": 64, 00:11:36.300 "state": "configuring", 00:11:36.300 "raid_level": "concat", 00:11:36.300 "superblock": true, 00:11:36.300 
"num_base_bdevs": 3, 00:11:36.300 "num_base_bdevs_discovered": 1, 00:11:36.300 "num_base_bdevs_operational": 3, 00:11:36.300 "base_bdevs_list": [ 00:11:36.300 { 00:11:36.300 "name": "BaseBdev1", 00:11:36.300 "uuid": "0cebc8da-466b-4b4a-a454-9473bd58fee9", 00:11:36.300 "is_configured": true, 00:11:36.300 "data_offset": 2048, 00:11:36.300 "data_size": 63488 00:11:36.300 }, 00:11:36.300 { 00:11:36.300 "name": "BaseBdev2", 00:11:36.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.300 "is_configured": false, 00:11:36.300 "data_offset": 0, 00:11:36.300 "data_size": 0 00:11:36.300 }, 00:11:36.300 { 00:11:36.300 "name": "BaseBdev3", 00:11:36.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.300 "is_configured": false, 00:11:36.300 "data_offset": 0, 00:11:36.300 "data_size": 0 00:11:36.300 } 00:11:36.300 ] 00:11:36.300 }' 00:11:36.300 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.300 14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.867 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:36.867 14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.867 14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.867 [2024-11-04 14:37:35.777709] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:36.867 [2024-11-04 14:37:35.777769] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:36.867 14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.867 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:36.867 
14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.867 14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.867 [2024-11-04 14:37:35.789768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.867 [2024-11-04 14:37:35.792471] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:36.867 [2024-11-04 14:37:35.792680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:36.867 [2024-11-04 14:37:35.792805] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:36.867 [2024-11-04 14:37:35.792863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:36.867 14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.867 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:36.867 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:36.867 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:36.867 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.867 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.867 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:36.867 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.867 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:36.867 14:37:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.867 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.867 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.867 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.867 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.867 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.867 14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.867 14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.867 14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.867 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.867 "name": "Existed_Raid", 00:11:36.867 "uuid": "29671cbd-03d5-47dc-9acb-5e61128da4a7", 00:11:36.867 "strip_size_kb": 64, 00:11:36.867 "state": "configuring", 00:11:36.867 "raid_level": "concat", 00:11:36.867 "superblock": true, 00:11:36.867 "num_base_bdevs": 3, 00:11:36.867 "num_base_bdevs_discovered": 1, 00:11:36.867 "num_base_bdevs_operational": 3, 00:11:36.867 "base_bdevs_list": [ 00:11:36.867 { 00:11:36.867 "name": "BaseBdev1", 00:11:36.867 "uuid": "0cebc8da-466b-4b4a-a454-9473bd58fee9", 00:11:36.867 "is_configured": true, 00:11:36.867 "data_offset": 2048, 00:11:36.867 "data_size": 63488 00:11:36.867 }, 00:11:36.867 { 00:11:36.867 "name": "BaseBdev2", 00:11:36.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.867 "is_configured": false, 00:11:36.867 "data_offset": 0, 00:11:36.867 "data_size": 0 00:11:36.867 }, 00:11:36.867 { 00:11:36.867 "name": "BaseBdev3", 00:11:36.867 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:36.867 "is_configured": false, 00:11:36.867 "data_offset": 0, 00:11:36.867 "data_size": 0 00:11:36.867 } 00:11:36.867 ] 00:11:36.867 }' 00:11:36.867 14:37:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.867 14:37:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.435 14:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:37.435 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.435 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.435 [2024-11-04 14:37:36.355722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:37.435 BaseBdev2 00:11:37.435 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.435 14:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:37.435 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:37.435 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:37.435 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:37.435 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:37.435 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:37.435 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:37.435 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.435 14:37:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:37.435 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.435 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:37.435 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.435 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.435 [ 00:11:37.435 { 00:11:37.435 "name": "BaseBdev2", 00:11:37.435 "aliases": [ 00:11:37.435 "5c901908-222a-47fd-80a9-098552e9e66b" 00:11:37.435 ], 00:11:37.435 "product_name": "Malloc disk", 00:11:37.435 "block_size": 512, 00:11:37.435 "num_blocks": 65536, 00:11:37.435 "uuid": "5c901908-222a-47fd-80a9-098552e9e66b", 00:11:37.435 "assigned_rate_limits": { 00:11:37.435 "rw_ios_per_sec": 0, 00:11:37.435 "rw_mbytes_per_sec": 0, 00:11:37.435 "r_mbytes_per_sec": 0, 00:11:37.435 "w_mbytes_per_sec": 0 00:11:37.435 }, 00:11:37.435 "claimed": true, 00:11:37.435 "claim_type": "exclusive_write", 00:11:37.435 "zoned": false, 00:11:37.435 "supported_io_types": { 00:11:37.435 "read": true, 00:11:37.435 "write": true, 00:11:37.435 "unmap": true, 00:11:37.435 "flush": true, 00:11:37.435 "reset": true, 00:11:37.435 "nvme_admin": false, 00:11:37.435 "nvme_io": false, 00:11:37.435 "nvme_io_md": false, 00:11:37.435 "write_zeroes": true, 00:11:37.436 "zcopy": true, 00:11:37.436 "get_zone_info": false, 00:11:37.436 "zone_management": false, 00:11:37.436 "zone_append": false, 00:11:37.436 "compare": false, 00:11:37.436 "compare_and_write": false, 00:11:37.436 "abort": true, 00:11:37.436 "seek_hole": false, 00:11:37.436 "seek_data": false, 00:11:37.436 "copy": true, 00:11:37.436 "nvme_iov_md": false 00:11:37.436 }, 00:11:37.436 "memory_domains": [ 00:11:37.436 { 00:11:37.436 "dma_device_id": "system", 00:11:37.436 "dma_device_type": 1 00:11:37.436 }, 00:11:37.436 { 00:11:37.436 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.436 "dma_device_type": 2 00:11:37.436 } 00:11:37.436 ], 00:11:37.436 "driver_specific": {} 00:11:37.436 } 00:11:37.436 ] 00:11:37.436 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.436 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:37.436 14:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:37.436 14:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:37.436 14:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:37.436 14:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.436 14:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.436 14:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:37.436 14:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.436 14:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:37.436 14:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.436 14:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.436 14:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.436 14:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.436 14:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.436 14:37:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.436 14:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.436 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.436 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.436 14:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.436 "name": "Existed_Raid", 00:11:37.436 "uuid": "29671cbd-03d5-47dc-9acb-5e61128da4a7", 00:11:37.436 "strip_size_kb": 64, 00:11:37.436 "state": "configuring", 00:11:37.436 "raid_level": "concat", 00:11:37.436 "superblock": true, 00:11:37.436 "num_base_bdevs": 3, 00:11:37.436 "num_base_bdevs_discovered": 2, 00:11:37.436 "num_base_bdevs_operational": 3, 00:11:37.436 "base_bdevs_list": [ 00:11:37.436 { 00:11:37.436 "name": "BaseBdev1", 00:11:37.436 "uuid": "0cebc8da-466b-4b4a-a454-9473bd58fee9", 00:11:37.436 "is_configured": true, 00:11:37.436 "data_offset": 2048, 00:11:37.436 "data_size": 63488 00:11:37.436 }, 00:11:37.436 { 00:11:37.436 "name": "BaseBdev2", 00:11:37.436 "uuid": "5c901908-222a-47fd-80a9-098552e9e66b", 00:11:37.436 "is_configured": true, 00:11:37.436 "data_offset": 2048, 00:11:37.436 "data_size": 63488 00:11:37.436 }, 00:11:37.436 { 00:11:37.436 "name": "BaseBdev3", 00:11:37.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.436 "is_configured": false, 00:11:37.436 "data_offset": 0, 00:11:37.436 "data_size": 0 00:11:37.436 } 00:11:37.436 ] 00:11:37.436 }' 00:11:37.436 14:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.436 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.004 14:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:38.004 14:37:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.004 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.004 [2024-11-04 14:37:36.931521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:38.004 [2024-11-04 14:37:36.931827] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:38.004 [2024-11-04 14:37:36.931860] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:38.004 BaseBdev3 00:11:38.004 [2024-11-04 14:37:36.932261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:38.005 [2024-11-04 14:37:36.932464] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:38.005 [2024-11-04 14:37:36.932482] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:38.005 [2024-11-04 14:37:36.932667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.005 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.005 14:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:38.005 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:38.005 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:38.005 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:38.005 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:38.005 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:38.005 14:37:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:38.005 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.005 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.005 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.005 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:38.005 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.005 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.005 [ 00:11:38.005 { 00:11:38.005 "name": "BaseBdev3", 00:11:38.005 "aliases": [ 00:11:38.005 "6346dee6-1987-4678-89d7-001ef0b09fac" 00:11:38.005 ], 00:11:38.005 "product_name": "Malloc disk", 00:11:38.005 "block_size": 512, 00:11:38.005 "num_blocks": 65536, 00:11:38.005 "uuid": "6346dee6-1987-4678-89d7-001ef0b09fac", 00:11:38.005 "assigned_rate_limits": { 00:11:38.005 "rw_ios_per_sec": 0, 00:11:38.005 "rw_mbytes_per_sec": 0, 00:11:38.005 "r_mbytes_per_sec": 0, 00:11:38.005 "w_mbytes_per_sec": 0 00:11:38.005 }, 00:11:38.005 "claimed": true, 00:11:38.005 "claim_type": "exclusive_write", 00:11:38.005 "zoned": false, 00:11:38.005 "supported_io_types": { 00:11:38.005 "read": true, 00:11:38.005 "write": true, 00:11:38.005 "unmap": true, 00:11:38.005 "flush": true, 00:11:38.005 "reset": true, 00:11:38.005 "nvme_admin": false, 00:11:38.005 "nvme_io": false, 00:11:38.005 "nvme_io_md": false, 00:11:38.005 "write_zeroes": true, 00:11:38.005 "zcopy": true, 00:11:38.005 "get_zone_info": false, 00:11:38.005 "zone_management": false, 00:11:38.005 "zone_append": false, 00:11:38.005 "compare": false, 00:11:38.005 "compare_and_write": false, 00:11:38.005 "abort": true, 00:11:38.005 "seek_hole": false, 00:11:38.005 "seek_data": false, 
00:11:38.005 "copy": true, 00:11:38.005 "nvme_iov_md": false 00:11:38.005 }, 00:11:38.005 "memory_domains": [ 00:11:38.005 { 00:11:38.005 "dma_device_id": "system", 00:11:38.005 "dma_device_type": 1 00:11:38.005 }, 00:11:38.005 { 00:11:38.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.005 "dma_device_type": 2 00:11:38.005 } 00:11:38.005 ], 00:11:38.005 "driver_specific": {} 00:11:38.005 } 00:11:38.005 ] 00:11:38.005 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.005 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:38.005 14:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:38.005 14:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:38.005 14:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:38.005 14:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.005 14:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.005 14:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:38.005 14:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.005 14:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:38.005 14:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.005 14:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.005 14:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.005 14:37:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.005 14:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.005 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.005 14:37:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.005 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.005 14:37:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.005 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.005 "name": "Existed_Raid", 00:11:38.005 "uuid": "29671cbd-03d5-47dc-9acb-5e61128da4a7", 00:11:38.005 "strip_size_kb": 64, 00:11:38.005 "state": "online", 00:11:38.005 "raid_level": "concat", 00:11:38.005 "superblock": true, 00:11:38.005 "num_base_bdevs": 3, 00:11:38.005 "num_base_bdevs_discovered": 3, 00:11:38.005 "num_base_bdevs_operational": 3, 00:11:38.005 "base_bdevs_list": [ 00:11:38.005 { 00:11:38.005 "name": "BaseBdev1", 00:11:38.005 "uuid": "0cebc8da-466b-4b4a-a454-9473bd58fee9", 00:11:38.005 "is_configured": true, 00:11:38.005 "data_offset": 2048, 00:11:38.005 "data_size": 63488 00:11:38.005 }, 00:11:38.005 { 00:11:38.005 "name": "BaseBdev2", 00:11:38.005 "uuid": "5c901908-222a-47fd-80a9-098552e9e66b", 00:11:38.005 "is_configured": true, 00:11:38.005 "data_offset": 2048, 00:11:38.005 "data_size": 63488 00:11:38.005 }, 00:11:38.005 { 00:11:38.005 "name": "BaseBdev3", 00:11:38.005 "uuid": "6346dee6-1987-4678-89d7-001ef0b09fac", 00:11:38.005 "is_configured": true, 00:11:38.005 "data_offset": 2048, 00:11:38.005 "data_size": 63488 00:11:38.005 } 00:11:38.005 ] 00:11:38.005 }' 00:11:38.005 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.005 14:37:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.573 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:38.573 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:38.573 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:38.573 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:38.573 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:38.573 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:38.573 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:38.573 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:38.573 14:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.573 14:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.573 [2024-11-04 14:37:37.504179] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:38.573 14:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.573 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:38.573 "name": "Existed_Raid", 00:11:38.573 "aliases": [ 00:11:38.573 "29671cbd-03d5-47dc-9acb-5e61128da4a7" 00:11:38.573 ], 00:11:38.573 "product_name": "Raid Volume", 00:11:38.573 "block_size": 512, 00:11:38.573 "num_blocks": 190464, 00:11:38.573 "uuid": "29671cbd-03d5-47dc-9acb-5e61128da4a7", 00:11:38.573 "assigned_rate_limits": { 00:11:38.573 "rw_ios_per_sec": 0, 00:11:38.573 "rw_mbytes_per_sec": 0, 00:11:38.573 
"r_mbytes_per_sec": 0, 00:11:38.573 "w_mbytes_per_sec": 0 00:11:38.573 }, 00:11:38.573 "claimed": false, 00:11:38.573 "zoned": false, 00:11:38.573 "supported_io_types": { 00:11:38.573 "read": true, 00:11:38.573 "write": true, 00:11:38.573 "unmap": true, 00:11:38.573 "flush": true, 00:11:38.573 "reset": true, 00:11:38.573 "nvme_admin": false, 00:11:38.573 "nvme_io": false, 00:11:38.573 "nvme_io_md": false, 00:11:38.573 "write_zeroes": true, 00:11:38.573 "zcopy": false, 00:11:38.573 "get_zone_info": false, 00:11:38.573 "zone_management": false, 00:11:38.573 "zone_append": false, 00:11:38.573 "compare": false, 00:11:38.573 "compare_and_write": false, 00:11:38.573 "abort": false, 00:11:38.573 "seek_hole": false, 00:11:38.573 "seek_data": false, 00:11:38.573 "copy": false, 00:11:38.573 "nvme_iov_md": false 00:11:38.573 }, 00:11:38.573 "memory_domains": [ 00:11:38.573 { 00:11:38.573 "dma_device_id": "system", 00:11:38.573 "dma_device_type": 1 00:11:38.573 }, 00:11:38.573 { 00:11:38.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.573 "dma_device_type": 2 00:11:38.573 }, 00:11:38.573 { 00:11:38.573 "dma_device_id": "system", 00:11:38.573 "dma_device_type": 1 00:11:38.573 }, 00:11:38.573 { 00:11:38.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.573 "dma_device_type": 2 00:11:38.573 }, 00:11:38.573 { 00:11:38.573 "dma_device_id": "system", 00:11:38.573 "dma_device_type": 1 00:11:38.573 }, 00:11:38.573 { 00:11:38.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.573 "dma_device_type": 2 00:11:38.573 } 00:11:38.573 ], 00:11:38.573 "driver_specific": { 00:11:38.573 "raid": { 00:11:38.573 "uuid": "29671cbd-03d5-47dc-9acb-5e61128da4a7", 00:11:38.573 "strip_size_kb": 64, 00:11:38.573 "state": "online", 00:11:38.573 "raid_level": "concat", 00:11:38.573 "superblock": true, 00:11:38.573 "num_base_bdevs": 3, 00:11:38.573 "num_base_bdevs_discovered": 3, 00:11:38.573 "num_base_bdevs_operational": 3, 00:11:38.573 "base_bdevs_list": [ 00:11:38.573 { 00:11:38.573 
"name": "BaseBdev1", 00:11:38.573 "uuid": "0cebc8da-466b-4b4a-a454-9473bd58fee9", 00:11:38.573 "is_configured": true, 00:11:38.573 "data_offset": 2048, 00:11:38.573 "data_size": 63488 00:11:38.573 }, 00:11:38.573 { 00:11:38.573 "name": "BaseBdev2", 00:11:38.573 "uuid": "5c901908-222a-47fd-80a9-098552e9e66b", 00:11:38.573 "is_configured": true, 00:11:38.573 "data_offset": 2048, 00:11:38.573 "data_size": 63488 00:11:38.573 }, 00:11:38.573 { 00:11:38.573 "name": "BaseBdev3", 00:11:38.573 "uuid": "6346dee6-1987-4678-89d7-001ef0b09fac", 00:11:38.573 "is_configured": true, 00:11:38.573 "data_offset": 2048, 00:11:38.573 "data_size": 63488 00:11:38.573 } 00:11:38.573 ] 00:11:38.573 } 00:11:38.573 } 00:11:38.573 }' 00:11:38.573 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:38.573 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:38.573 BaseBdev2 00:11:38.573 BaseBdev3' 00:11:38.573 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.573 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:38.573 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.573 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:38.573 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.573 14:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.573 14:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.573 14:37:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.833 [2024-11-04 14:37:37.811911] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:38.833 [2024-11-04 14:37:37.811960] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:38.833 [2024-11-04 14:37:37.812033] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.833 "name": "Existed_Raid", 00:11:38.833 "uuid": "29671cbd-03d5-47dc-9acb-5e61128da4a7", 00:11:38.833 "strip_size_kb": 64, 00:11:38.833 "state": "offline", 00:11:38.833 "raid_level": "concat", 00:11:38.833 "superblock": true, 00:11:38.833 "num_base_bdevs": 3, 00:11:38.833 "num_base_bdevs_discovered": 2, 00:11:38.833 "num_base_bdevs_operational": 2, 00:11:38.833 "base_bdevs_list": [ 00:11:38.833 { 00:11:38.833 "name": null, 00:11:38.833 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:38.833 "is_configured": false, 00:11:38.833 "data_offset": 0, 00:11:38.833 "data_size": 63488 00:11:38.833 }, 00:11:38.833 { 00:11:38.833 "name": "BaseBdev2", 00:11:38.833 "uuid": "5c901908-222a-47fd-80a9-098552e9e66b", 00:11:38.833 "is_configured": true, 00:11:38.833 "data_offset": 2048, 00:11:38.833 "data_size": 63488 00:11:38.833 }, 00:11:38.833 { 00:11:38.833 "name": "BaseBdev3", 00:11:38.833 "uuid": "6346dee6-1987-4678-89d7-001ef0b09fac", 00:11:38.833 "is_configured": true, 00:11:38.833 "data_offset": 2048, 00:11:38.833 "data_size": 63488 00:11:38.833 } 00:11:38.833 ] 00:11:38.833 }' 00:11:38.833 14:37:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.091 14:37:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.349 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:39.349 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:39.349 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.349 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:39.349 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.349 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.349 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.608 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:39.608 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:39.608 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:11:39.608 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.608 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.608 [2024-11-04 14:37:38.488647] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:39.609 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.609 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:39.609 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:39.609 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.609 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.609 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:39.609 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.609 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.609 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:39.609 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:39.609 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:39.609 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.609 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.609 [2024-11-04 14:37:38.628831] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:39.609 [2024-11-04 14:37:38.628889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:39.609 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.609 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:39.609 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:39.609 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.609 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.609 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.609 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:39.609 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.868 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:39.868 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:39.868 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:39.868 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:39.868 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:39.868 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:39.868 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.868 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.868 BaseBdev2 00:11:39.868 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.868 
14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:39.868 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:39.868 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:39.868 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:39.868 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:39.868 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:39.868 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:39.868 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.868 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.868 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.868 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:39.868 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.868 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.868 [ 00:11:39.868 { 00:11:39.868 "name": "BaseBdev2", 00:11:39.868 "aliases": [ 00:11:39.868 "6aaa4db5-6a8d-446b-b1a8-ceb148965917" 00:11:39.868 ], 00:11:39.868 "product_name": "Malloc disk", 00:11:39.868 "block_size": 512, 00:11:39.868 "num_blocks": 65536, 00:11:39.868 "uuid": "6aaa4db5-6a8d-446b-b1a8-ceb148965917", 00:11:39.868 "assigned_rate_limits": { 00:11:39.868 "rw_ios_per_sec": 0, 00:11:39.868 "rw_mbytes_per_sec": 0, 00:11:39.868 "r_mbytes_per_sec": 0, 00:11:39.868 "w_mbytes_per_sec": 0 
00:11:39.868 }, 00:11:39.868 "claimed": false, 00:11:39.868 "zoned": false, 00:11:39.868 "supported_io_types": { 00:11:39.868 "read": true, 00:11:39.868 "write": true, 00:11:39.868 "unmap": true, 00:11:39.868 "flush": true, 00:11:39.868 "reset": true, 00:11:39.868 "nvme_admin": false, 00:11:39.868 "nvme_io": false, 00:11:39.868 "nvme_io_md": false, 00:11:39.868 "write_zeroes": true, 00:11:39.868 "zcopy": true, 00:11:39.868 "get_zone_info": false, 00:11:39.868 "zone_management": false, 00:11:39.868 "zone_append": false, 00:11:39.868 "compare": false, 00:11:39.868 "compare_and_write": false, 00:11:39.868 "abort": true, 00:11:39.868 "seek_hole": false, 00:11:39.868 "seek_data": false, 00:11:39.868 "copy": true, 00:11:39.868 "nvme_iov_md": false 00:11:39.868 }, 00:11:39.868 "memory_domains": [ 00:11:39.868 { 00:11:39.868 "dma_device_id": "system", 00:11:39.868 "dma_device_type": 1 00:11:39.868 }, 00:11:39.868 { 00:11:39.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.868 "dma_device_type": 2 00:11:39.868 } 00:11:39.868 ], 00:11:39.868 "driver_specific": {} 00:11:39.868 } 00:11:39.868 ] 00:11:39.868 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.868 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:39.868 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:39.868 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:39.868 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:39.868 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.868 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.868 BaseBdev3 00:11:39.868 14:37:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.869 [ 00:11:39.869 { 00:11:39.869 "name": "BaseBdev3", 00:11:39.869 "aliases": [ 00:11:39.869 "9db6a921-b5fc-4c19-a726-4a9cda5ef8b5" 00:11:39.869 ], 00:11:39.869 "product_name": "Malloc disk", 00:11:39.869 "block_size": 512, 00:11:39.869 "num_blocks": 65536, 00:11:39.869 "uuid": "9db6a921-b5fc-4c19-a726-4a9cda5ef8b5", 00:11:39.869 "assigned_rate_limits": { 00:11:39.869 "rw_ios_per_sec": 0, 00:11:39.869 "rw_mbytes_per_sec": 0, 
00:11:39.869 "r_mbytes_per_sec": 0, 00:11:39.869 "w_mbytes_per_sec": 0 00:11:39.869 }, 00:11:39.869 "claimed": false, 00:11:39.869 "zoned": false, 00:11:39.869 "supported_io_types": { 00:11:39.869 "read": true, 00:11:39.869 "write": true, 00:11:39.869 "unmap": true, 00:11:39.869 "flush": true, 00:11:39.869 "reset": true, 00:11:39.869 "nvme_admin": false, 00:11:39.869 "nvme_io": false, 00:11:39.869 "nvme_io_md": false, 00:11:39.869 "write_zeroes": true, 00:11:39.869 "zcopy": true, 00:11:39.869 "get_zone_info": false, 00:11:39.869 "zone_management": false, 00:11:39.869 "zone_append": false, 00:11:39.869 "compare": false, 00:11:39.869 "compare_and_write": false, 00:11:39.869 "abort": true, 00:11:39.869 "seek_hole": false, 00:11:39.869 "seek_data": false, 00:11:39.869 "copy": true, 00:11:39.869 "nvme_iov_md": false 00:11:39.869 }, 00:11:39.869 "memory_domains": [ 00:11:39.869 { 00:11:39.869 "dma_device_id": "system", 00:11:39.869 "dma_device_type": 1 00:11:39.869 }, 00:11:39.869 { 00:11:39.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.869 "dma_device_type": 2 00:11:39.869 } 00:11:39.869 ], 00:11:39.869 "driver_specific": {} 00:11:39.869 } 00:11:39.869 ] 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:39.869 [2024-11-04 14:37:38.929258] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:39.869 [2024-11-04 14:37:38.929328] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:39.869 [2024-11-04 14:37:38.929374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:39.869 [2024-11-04 14:37:38.931927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.869 14:37:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.869 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.127 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.127 "name": "Existed_Raid", 00:11:40.127 "uuid": "da40732c-d452-49dd-bdc7-feac4169e7b2", 00:11:40.127 "strip_size_kb": 64, 00:11:40.127 "state": "configuring", 00:11:40.127 "raid_level": "concat", 00:11:40.127 "superblock": true, 00:11:40.127 "num_base_bdevs": 3, 00:11:40.127 "num_base_bdevs_discovered": 2, 00:11:40.127 "num_base_bdevs_operational": 3, 00:11:40.127 "base_bdevs_list": [ 00:11:40.127 { 00:11:40.127 "name": "BaseBdev1", 00:11:40.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.127 "is_configured": false, 00:11:40.127 "data_offset": 0, 00:11:40.127 "data_size": 0 00:11:40.127 }, 00:11:40.127 { 00:11:40.127 "name": "BaseBdev2", 00:11:40.127 "uuid": "6aaa4db5-6a8d-446b-b1a8-ceb148965917", 00:11:40.127 "is_configured": true, 00:11:40.127 "data_offset": 2048, 00:11:40.127 "data_size": 63488 00:11:40.127 }, 00:11:40.127 { 00:11:40.127 "name": "BaseBdev3", 00:11:40.127 "uuid": "9db6a921-b5fc-4c19-a726-4a9cda5ef8b5", 00:11:40.127 "is_configured": true, 00:11:40.127 "data_offset": 2048, 00:11:40.127 "data_size": 63488 00:11:40.127 } 00:11:40.127 ] 00:11:40.127 }' 00:11:40.127 14:37:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.127 14:37:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.387 14:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:11:40.387 14:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.387 14:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.387 [2024-11-04 14:37:39.457400] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:40.387 14:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.387 14:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:40.387 14:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.387 14:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.387 14:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:40.387 14:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:40.387 14:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:40.387 14:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.387 14:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.387 14:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.387 14:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.387 14:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.387 14:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.387 14:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.387 14:37:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.387 14:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.651 14:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.651 "name": "Existed_Raid", 00:11:40.651 "uuid": "da40732c-d452-49dd-bdc7-feac4169e7b2", 00:11:40.651 "strip_size_kb": 64, 00:11:40.651 "state": "configuring", 00:11:40.651 "raid_level": "concat", 00:11:40.651 "superblock": true, 00:11:40.651 "num_base_bdevs": 3, 00:11:40.651 "num_base_bdevs_discovered": 1, 00:11:40.651 "num_base_bdevs_operational": 3, 00:11:40.651 "base_bdevs_list": [ 00:11:40.651 { 00:11:40.651 "name": "BaseBdev1", 00:11:40.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.652 "is_configured": false, 00:11:40.652 "data_offset": 0, 00:11:40.652 "data_size": 0 00:11:40.652 }, 00:11:40.652 { 00:11:40.652 "name": null, 00:11:40.652 "uuid": "6aaa4db5-6a8d-446b-b1a8-ceb148965917", 00:11:40.652 "is_configured": false, 00:11:40.652 "data_offset": 0, 00:11:40.652 "data_size": 63488 00:11:40.652 }, 00:11:40.652 { 00:11:40.652 "name": "BaseBdev3", 00:11:40.652 "uuid": "9db6a921-b5fc-4c19-a726-4a9cda5ef8b5", 00:11:40.652 "is_configured": true, 00:11:40.652 "data_offset": 2048, 00:11:40.652 "data_size": 63488 00:11:40.652 } 00:11:40.652 ] 00:11:40.652 }' 00:11:40.652 14:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.652 14:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.926 14:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:40.926 14:37:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.926 14:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:40.926 14:37:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.926 14:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.926 14:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:40.926 14:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:40.926 14:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.926 14:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.184 [2024-11-04 14:37:40.072613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:41.184 BaseBdev1 00:11:41.184 14:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.184 14:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:41.184 14:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:41.184 14:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:41.184 14:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:41.184 14:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:41.184 14:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:41.184 14:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:41.184 14:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.184 14:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:11:41.184 14:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.184 14:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:41.184 14:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.184 14:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.184 [ 00:11:41.184 { 00:11:41.184 "name": "BaseBdev1", 00:11:41.184 "aliases": [ 00:11:41.184 "d95c58b0-71cf-419a-97a0-dbe7a717f48b" 00:11:41.184 ], 00:11:41.184 "product_name": "Malloc disk", 00:11:41.184 "block_size": 512, 00:11:41.184 "num_blocks": 65536, 00:11:41.184 "uuid": "d95c58b0-71cf-419a-97a0-dbe7a717f48b", 00:11:41.184 "assigned_rate_limits": { 00:11:41.184 "rw_ios_per_sec": 0, 00:11:41.184 "rw_mbytes_per_sec": 0, 00:11:41.184 "r_mbytes_per_sec": 0, 00:11:41.184 "w_mbytes_per_sec": 0 00:11:41.184 }, 00:11:41.184 "claimed": true, 00:11:41.184 "claim_type": "exclusive_write", 00:11:41.184 "zoned": false, 00:11:41.184 "supported_io_types": { 00:11:41.184 "read": true, 00:11:41.184 "write": true, 00:11:41.184 "unmap": true, 00:11:41.184 "flush": true, 00:11:41.184 "reset": true, 00:11:41.184 "nvme_admin": false, 00:11:41.184 "nvme_io": false, 00:11:41.184 "nvme_io_md": false, 00:11:41.184 "write_zeroes": true, 00:11:41.184 "zcopy": true, 00:11:41.184 "get_zone_info": false, 00:11:41.184 "zone_management": false, 00:11:41.184 "zone_append": false, 00:11:41.184 "compare": false, 00:11:41.184 "compare_and_write": false, 00:11:41.184 "abort": true, 00:11:41.184 "seek_hole": false, 00:11:41.184 "seek_data": false, 00:11:41.184 "copy": true, 00:11:41.184 "nvme_iov_md": false 00:11:41.184 }, 00:11:41.184 "memory_domains": [ 00:11:41.184 { 00:11:41.184 "dma_device_id": "system", 00:11:41.184 "dma_device_type": 1 00:11:41.184 }, 00:11:41.184 { 00:11:41.184 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:41.184 "dma_device_type": 2 00:11:41.184 } 00:11:41.184 ], 00:11:41.184 "driver_specific": {} 00:11:41.184 } 00:11:41.184 ] 00:11:41.184 14:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.184 14:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:41.184 14:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:41.184 14:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.184 14:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.184 14:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:41.184 14:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.184 14:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:41.184 14:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.184 14:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.184 14:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.184 14:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.184 14:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.184 14:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.184 14:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.184 14:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:11:41.184 14:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.184 14:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.184 "name": "Existed_Raid", 00:11:41.184 "uuid": "da40732c-d452-49dd-bdc7-feac4169e7b2", 00:11:41.184 "strip_size_kb": 64, 00:11:41.184 "state": "configuring", 00:11:41.184 "raid_level": "concat", 00:11:41.184 "superblock": true, 00:11:41.184 "num_base_bdevs": 3, 00:11:41.184 "num_base_bdevs_discovered": 2, 00:11:41.184 "num_base_bdevs_operational": 3, 00:11:41.184 "base_bdevs_list": [ 00:11:41.184 { 00:11:41.184 "name": "BaseBdev1", 00:11:41.184 "uuid": "d95c58b0-71cf-419a-97a0-dbe7a717f48b", 00:11:41.184 "is_configured": true, 00:11:41.184 "data_offset": 2048, 00:11:41.184 "data_size": 63488 00:11:41.185 }, 00:11:41.185 { 00:11:41.185 "name": null, 00:11:41.185 "uuid": "6aaa4db5-6a8d-446b-b1a8-ceb148965917", 00:11:41.185 "is_configured": false, 00:11:41.185 "data_offset": 0, 00:11:41.185 "data_size": 63488 00:11:41.185 }, 00:11:41.185 { 00:11:41.185 "name": "BaseBdev3", 00:11:41.185 "uuid": "9db6a921-b5fc-4c19-a726-4a9cda5ef8b5", 00:11:41.185 "is_configured": true, 00:11:41.185 "data_offset": 2048, 00:11:41.185 "data_size": 63488 00:11:41.185 } 00:11:41.185 ] 00:11:41.185 }' 00:11:41.185 14:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.185 14:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.751 14:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.751 14:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:41.751 14:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.751 14:37:40 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:41.751 14:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.751 14:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:41.751 14:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:41.751 14:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.751 14:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.751 [2024-11-04 14:37:40.652787] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:41.751 14:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.751 14:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:41.751 14:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.751 14:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.751 14:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:41.751 14:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.751 14:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:41.751 14:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.751 14:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.751 14:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.751 14:37:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.751 14:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.751 14:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.751 14:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.751 14:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.751 14:37:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.751 14:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.751 "name": "Existed_Raid", 00:11:41.751 "uuid": "da40732c-d452-49dd-bdc7-feac4169e7b2", 00:11:41.751 "strip_size_kb": 64, 00:11:41.751 "state": "configuring", 00:11:41.751 "raid_level": "concat", 00:11:41.751 "superblock": true, 00:11:41.751 "num_base_bdevs": 3, 00:11:41.751 "num_base_bdevs_discovered": 1, 00:11:41.751 "num_base_bdevs_operational": 3, 00:11:41.751 "base_bdevs_list": [ 00:11:41.751 { 00:11:41.751 "name": "BaseBdev1", 00:11:41.751 "uuid": "d95c58b0-71cf-419a-97a0-dbe7a717f48b", 00:11:41.751 "is_configured": true, 00:11:41.751 "data_offset": 2048, 00:11:41.751 "data_size": 63488 00:11:41.751 }, 00:11:41.751 { 00:11:41.751 "name": null, 00:11:41.752 "uuid": "6aaa4db5-6a8d-446b-b1a8-ceb148965917", 00:11:41.752 "is_configured": false, 00:11:41.752 "data_offset": 0, 00:11:41.752 "data_size": 63488 00:11:41.752 }, 00:11:41.752 { 00:11:41.752 "name": null, 00:11:41.752 "uuid": "9db6a921-b5fc-4c19-a726-4a9cda5ef8b5", 00:11:41.752 "is_configured": false, 00:11:41.752 "data_offset": 0, 00:11:41.752 "data_size": 63488 00:11:41.752 } 00:11:41.752 ] 00:11:41.752 }' 00:11:41.752 14:37:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.752 14:37:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:42.348 14:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.348 14:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:42.348 14:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.348 14:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.348 14:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.348 14:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:42.348 14:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:42.348 14:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.348 14:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.348 [2024-11-04 14:37:41.213001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:42.348 14:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.348 14:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:42.348 14:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.348 14:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.348 14:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:42.348 14:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.348 14:37:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:42.348 14:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.348 14:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.348 14:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.348 14:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.348 14:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.348 14:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.348 14:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.348 14:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.348 14:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.348 14:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.348 "name": "Existed_Raid", 00:11:42.348 "uuid": "da40732c-d452-49dd-bdc7-feac4169e7b2", 00:11:42.348 "strip_size_kb": 64, 00:11:42.348 "state": "configuring", 00:11:42.348 "raid_level": "concat", 00:11:42.348 "superblock": true, 00:11:42.348 "num_base_bdevs": 3, 00:11:42.348 "num_base_bdevs_discovered": 2, 00:11:42.348 "num_base_bdevs_operational": 3, 00:11:42.348 "base_bdevs_list": [ 00:11:42.348 { 00:11:42.348 "name": "BaseBdev1", 00:11:42.348 "uuid": "d95c58b0-71cf-419a-97a0-dbe7a717f48b", 00:11:42.348 "is_configured": true, 00:11:42.348 "data_offset": 2048, 00:11:42.348 "data_size": 63488 00:11:42.348 }, 00:11:42.348 { 00:11:42.348 "name": null, 00:11:42.348 "uuid": "6aaa4db5-6a8d-446b-b1a8-ceb148965917", 00:11:42.348 "is_configured": 
false, 00:11:42.348 "data_offset": 0, 00:11:42.348 "data_size": 63488 00:11:42.348 }, 00:11:42.348 { 00:11:42.348 "name": "BaseBdev3", 00:11:42.348 "uuid": "9db6a921-b5fc-4c19-a726-4a9cda5ef8b5", 00:11:42.348 "is_configured": true, 00:11:42.348 "data_offset": 2048, 00:11:42.348 "data_size": 63488 00:11:42.348 } 00:11:42.348 ] 00:11:42.348 }' 00:11:42.348 14:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.348 14:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.606 14:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.606 14:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.606 14:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:42.606 14:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.606 14:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.865 14:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:42.865 14:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:42.865 14:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.865 14:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.865 [2024-11-04 14:37:41.757162] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:42.865 14:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.865 14:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:42.865 14:37:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.865 14:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.865 14:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:42.865 14:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.865 14:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:42.865 14:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.865 14:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.865 14:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.865 14:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.865 14:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.865 14:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.865 14:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.865 14:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.865 14:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.865 14:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.865 "name": "Existed_Raid", 00:11:42.865 "uuid": "da40732c-d452-49dd-bdc7-feac4169e7b2", 00:11:42.865 "strip_size_kb": 64, 00:11:42.865 "state": "configuring", 00:11:42.865 "raid_level": "concat", 00:11:42.865 "superblock": true, 00:11:42.865 "num_base_bdevs": 3, 00:11:42.865 
"num_base_bdevs_discovered": 1, 00:11:42.865 "num_base_bdevs_operational": 3, 00:11:42.865 "base_bdevs_list": [ 00:11:42.865 { 00:11:42.865 "name": null, 00:11:42.865 "uuid": "d95c58b0-71cf-419a-97a0-dbe7a717f48b", 00:11:42.865 "is_configured": false, 00:11:42.865 "data_offset": 0, 00:11:42.865 "data_size": 63488 00:11:42.865 }, 00:11:42.865 { 00:11:42.865 "name": null, 00:11:42.865 "uuid": "6aaa4db5-6a8d-446b-b1a8-ceb148965917", 00:11:42.865 "is_configured": false, 00:11:42.865 "data_offset": 0, 00:11:42.865 "data_size": 63488 00:11:42.865 }, 00:11:42.865 { 00:11:42.865 "name": "BaseBdev3", 00:11:42.865 "uuid": "9db6a921-b5fc-4c19-a726-4a9cda5ef8b5", 00:11:42.865 "is_configured": true, 00:11:42.865 "data_offset": 2048, 00:11:42.865 "data_size": 63488 00:11:42.865 } 00:11:42.865 ] 00:11:42.865 }' 00:11:42.865 14:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.865 14:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.434 14:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:43.434 14:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.434 14:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.434 14:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.434 14:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.434 14:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:43.434 14:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:43.434 14:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.434 14:37:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.434 [2024-11-04 14:37:42.401493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:43.434 14:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.434 14:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:43.434 14:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.434 14:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.434 14:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:43.434 14:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:43.434 14:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:43.434 14:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.434 14:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.434 14:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.434 14:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.434 14:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.434 14:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.434 14:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.434 14:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.434 
14:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.434 14:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.434 "name": "Existed_Raid", 00:11:43.434 "uuid": "da40732c-d452-49dd-bdc7-feac4169e7b2", 00:11:43.434 "strip_size_kb": 64, 00:11:43.434 "state": "configuring", 00:11:43.434 "raid_level": "concat", 00:11:43.434 "superblock": true, 00:11:43.434 "num_base_bdevs": 3, 00:11:43.434 "num_base_bdevs_discovered": 2, 00:11:43.434 "num_base_bdevs_operational": 3, 00:11:43.434 "base_bdevs_list": [ 00:11:43.434 { 00:11:43.434 "name": null, 00:11:43.434 "uuid": "d95c58b0-71cf-419a-97a0-dbe7a717f48b", 00:11:43.434 "is_configured": false, 00:11:43.434 "data_offset": 0, 00:11:43.434 "data_size": 63488 00:11:43.434 }, 00:11:43.434 { 00:11:43.434 "name": "BaseBdev2", 00:11:43.434 "uuid": "6aaa4db5-6a8d-446b-b1a8-ceb148965917", 00:11:43.434 "is_configured": true, 00:11:43.434 "data_offset": 2048, 00:11:43.434 "data_size": 63488 00:11:43.434 }, 00:11:43.434 { 00:11:43.434 "name": "BaseBdev3", 00:11:43.434 "uuid": "9db6a921-b5fc-4c19-a726-4a9cda5ef8b5", 00:11:43.434 "is_configured": true, 00:11:43.434 "data_offset": 2048, 00:11:43.434 "data_size": 63488 00:11:43.434 } 00:11:43.434 ] 00:11:43.434 }' 00:11:43.434 14:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.434 14:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.002 14:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.002 14:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:44.002 14:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.002 14:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
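The `verify_raid_bdev_state` helper traced above (bdev_raid.sh@113) isolates one raid bdev's entry from the `bdev_raid_get_bdevs all` output with a jq select before checking fields such as `state`. A minimal standalone sketch of that extraction, using a hypothetical trimmed-down JSON sample in place of the live RPC output:

```shell
# Hypothetical trimmed-down stand-in for `rpc_cmd bdev_raid_get_bdevs all`;
# field names mirror the raid_bdev_info dump in the log above.
bdevs='[{"name":"Existed_Raid","state":"configuring","num_base_bdevs_discovered":2,"num_base_bdevs_operational":3}]'

# Same select used at bdev_raid.sh@113 to grab one raid bdev's info blob.
raid_bdev_info=$(echo "$bdevs" | jq -r '.[] | select(.name == "Existed_Raid")')

# Individual fields are then pulled out and compared against expectations.
state=$(echo "$raid_bdev_info" | jq -r '.state')
echo "$state"   # configuring
```

The same pattern with `.[0].base_bdevs_list[N].is_configured` drives the `[[ false == \f\a\l\s\e ]]` checks seen in the trace.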
00:11:44.002 14:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.002 14:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:44.002 14:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:44.002 14:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.003 14:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.003 14:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.003 14:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.003 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d95c58b0-71cf-419a-97a0-dbe7a717f48b 00:11:44.003 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.003 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.003 [2024-11-04 14:37:43.043552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:44.003 [2024-11-04 14:37:43.043856] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:44.003 [2024-11-04 14:37:43.043881] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:44.003 [2024-11-04 14:37:43.044208] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:44.003 [2024-11-04 14:37:43.044398] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:44.003 [2024-11-04 14:37:43.044423] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 
00:11:44.003 NewBaseBdev 00:11:44.003 [2024-11-04 14:37:43.044585] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.003 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.003 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:44.003 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:11:44.003 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:44.003 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:44.003 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:44.003 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:44.003 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:44.003 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.003 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.003 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.003 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:44.003 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.003 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.003 [ 00:11:44.003 { 00:11:44.003 "name": "NewBaseBdev", 00:11:44.003 "aliases": [ 00:11:44.003 "d95c58b0-71cf-419a-97a0-dbe7a717f48b" 00:11:44.003 ], 00:11:44.003 "product_name": "Malloc disk", 00:11:44.003 "block_size": 512, 
00:11:44.003 "num_blocks": 65536, 00:11:44.003 "uuid": "d95c58b0-71cf-419a-97a0-dbe7a717f48b", 00:11:44.003 "assigned_rate_limits": { 00:11:44.003 "rw_ios_per_sec": 0, 00:11:44.003 "rw_mbytes_per_sec": 0, 00:11:44.003 "r_mbytes_per_sec": 0, 00:11:44.003 "w_mbytes_per_sec": 0 00:11:44.003 }, 00:11:44.003 "claimed": true, 00:11:44.003 "claim_type": "exclusive_write", 00:11:44.003 "zoned": false, 00:11:44.003 "supported_io_types": { 00:11:44.003 "read": true, 00:11:44.003 "write": true, 00:11:44.003 "unmap": true, 00:11:44.003 "flush": true, 00:11:44.003 "reset": true, 00:11:44.003 "nvme_admin": false, 00:11:44.003 "nvme_io": false, 00:11:44.003 "nvme_io_md": false, 00:11:44.003 "write_zeroes": true, 00:11:44.003 "zcopy": true, 00:11:44.003 "get_zone_info": false, 00:11:44.003 "zone_management": false, 00:11:44.003 "zone_append": false, 00:11:44.003 "compare": false, 00:11:44.003 "compare_and_write": false, 00:11:44.003 "abort": true, 00:11:44.003 "seek_hole": false, 00:11:44.003 "seek_data": false, 00:11:44.003 "copy": true, 00:11:44.003 "nvme_iov_md": false 00:11:44.003 }, 00:11:44.003 "memory_domains": [ 00:11:44.003 { 00:11:44.003 "dma_device_id": "system", 00:11:44.003 "dma_device_type": 1 00:11:44.003 }, 00:11:44.003 { 00:11:44.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.003 "dma_device_type": 2 00:11:44.003 } 00:11:44.003 ], 00:11:44.003 "driver_specific": {} 00:11:44.003 } 00:11:44.003 ] 00:11:44.003 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.003 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:44.003 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:44.003 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.003 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:11:44.003 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:44.003 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:44.003 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:44.003 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.003 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.003 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.003 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.003 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.003 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.003 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.003 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.003 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.262 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.262 "name": "Existed_Raid", 00:11:44.262 "uuid": "da40732c-d452-49dd-bdc7-feac4169e7b2", 00:11:44.262 "strip_size_kb": 64, 00:11:44.262 "state": "online", 00:11:44.262 "raid_level": "concat", 00:11:44.262 "superblock": true, 00:11:44.262 "num_base_bdevs": 3, 00:11:44.262 "num_base_bdevs_discovered": 3, 00:11:44.262 "num_base_bdevs_operational": 3, 00:11:44.262 "base_bdevs_list": [ 00:11:44.262 { 00:11:44.262 "name": "NewBaseBdev", 00:11:44.262 "uuid": 
"d95c58b0-71cf-419a-97a0-dbe7a717f48b", 00:11:44.262 "is_configured": true, 00:11:44.262 "data_offset": 2048, 00:11:44.262 "data_size": 63488 00:11:44.262 }, 00:11:44.262 { 00:11:44.262 "name": "BaseBdev2", 00:11:44.262 "uuid": "6aaa4db5-6a8d-446b-b1a8-ceb148965917", 00:11:44.262 "is_configured": true, 00:11:44.262 "data_offset": 2048, 00:11:44.262 "data_size": 63488 00:11:44.262 }, 00:11:44.262 { 00:11:44.262 "name": "BaseBdev3", 00:11:44.262 "uuid": "9db6a921-b5fc-4c19-a726-4a9cda5ef8b5", 00:11:44.262 "is_configured": true, 00:11:44.262 "data_offset": 2048, 00:11:44.262 "data_size": 63488 00:11:44.262 } 00:11:44.262 ] 00:11:44.262 }' 00:11:44.262 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.262 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.521 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:44.521 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:44.521 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:44.521 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:44.521 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:44.521 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:44.521 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:44.521 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.521 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:44.521 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:44.521 [2024-11-04 14:37:43.588160] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:44.521 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.521 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:44.521 "name": "Existed_Raid", 00:11:44.521 "aliases": [ 00:11:44.521 "da40732c-d452-49dd-bdc7-feac4169e7b2" 00:11:44.521 ], 00:11:44.521 "product_name": "Raid Volume", 00:11:44.521 "block_size": 512, 00:11:44.521 "num_blocks": 190464, 00:11:44.521 "uuid": "da40732c-d452-49dd-bdc7-feac4169e7b2", 00:11:44.521 "assigned_rate_limits": { 00:11:44.521 "rw_ios_per_sec": 0, 00:11:44.521 "rw_mbytes_per_sec": 0, 00:11:44.521 "r_mbytes_per_sec": 0, 00:11:44.521 "w_mbytes_per_sec": 0 00:11:44.521 }, 00:11:44.521 "claimed": false, 00:11:44.521 "zoned": false, 00:11:44.521 "supported_io_types": { 00:11:44.521 "read": true, 00:11:44.521 "write": true, 00:11:44.521 "unmap": true, 00:11:44.521 "flush": true, 00:11:44.521 "reset": true, 00:11:44.521 "nvme_admin": false, 00:11:44.521 "nvme_io": false, 00:11:44.521 "nvme_io_md": false, 00:11:44.521 "write_zeroes": true, 00:11:44.521 "zcopy": false, 00:11:44.521 "get_zone_info": false, 00:11:44.521 "zone_management": false, 00:11:44.521 "zone_append": false, 00:11:44.521 "compare": false, 00:11:44.521 "compare_and_write": false, 00:11:44.521 "abort": false, 00:11:44.521 "seek_hole": false, 00:11:44.521 "seek_data": false, 00:11:44.521 "copy": false, 00:11:44.521 "nvme_iov_md": false 00:11:44.521 }, 00:11:44.521 "memory_domains": [ 00:11:44.521 { 00:11:44.521 "dma_device_id": "system", 00:11:44.521 "dma_device_type": 1 00:11:44.521 }, 00:11:44.521 { 00:11:44.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.521 "dma_device_type": 2 00:11:44.521 }, 00:11:44.521 { 00:11:44.521 "dma_device_id": "system", 00:11:44.521 "dma_device_type": 1 00:11:44.521 }, 00:11:44.521 { 00:11:44.521 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.521 "dma_device_type": 2 00:11:44.521 }, 00:11:44.521 { 00:11:44.521 "dma_device_id": "system", 00:11:44.521 "dma_device_type": 1 00:11:44.521 }, 00:11:44.521 { 00:11:44.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.521 "dma_device_type": 2 00:11:44.521 } 00:11:44.521 ], 00:11:44.521 "driver_specific": { 00:11:44.521 "raid": { 00:11:44.521 "uuid": "da40732c-d452-49dd-bdc7-feac4169e7b2", 00:11:44.521 "strip_size_kb": 64, 00:11:44.521 "state": "online", 00:11:44.521 "raid_level": "concat", 00:11:44.521 "superblock": true, 00:11:44.521 "num_base_bdevs": 3, 00:11:44.521 "num_base_bdevs_discovered": 3, 00:11:44.521 "num_base_bdevs_operational": 3, 00:11:44.521 "base_bdevs_list": [ 00:11:44.521 { 00:11:44.521 "name": "NewBaseBdev", 00:11:44.521 "uuid": "d95c58b0-71cf-419a-97a0-dbe7a717f48b", 00:11:44.521 "is_configured": true, 00:11:44.521 "data_offset": 2048, 00:11:44.521 "data_size": 63488 00:11:44.521 }, 00:11:44.521 { 00:11:44.521 "name": "BaseBdev2", 00:11:44.521 "uuid": "6aaa4db5-6a8d-446b-b1a8-ceb148965917", 00:11:44.521 "is_configured": true, 00:11:44.521 "data_offset": 2048, 00:11:44.521 "data_size": 63488 00:11:44.521 }, 00:11:44.521 { 00:11:44.521 "name": "BaseBdev3", 00:11:44.521 "uuid": "9db6a921-b5fc-4c19-a726-4a9cda5ef8b5", 00:11:44.521 "is_configured": true, 00:11:44.521 "data_offset": 2048, 00:11:44.521 "data_size": 63488 00:11:44.521 } 00:11:44.521 ] 00:11:44.521 } 00:11:44.521 } 00:11:44.521 }' 00:11:44.521 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:44.780 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:44.780 BaseBdev2 00:11:44.780 BaseBdev3' 00:11:44.780 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
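The `cmp_raid_bdev`/`cmp_base_bdev` values compared above are metadata fingerprints built by joining four fields; jq's `join(" ")` renders null fields as empty strings, which is why a plain 512-byte bdev shows up as `'512 '` with trailing spaces (the `\5\1\2\ \ \ ` glob in the xtrace). A small sketch, assuming a jq recent enough (1.6+) to stringify numbers inside `join` and a hypothetical one-bdev JSON sample:

```shell
# Hypothetical single-bdev slice of `rpc_cmd bdev_get_bdevs -b <name>`;
# md_size/md_interleave/dif_type are absent (null) on a plain malloc bdev.
bdev='[{"block_size":512,"md_size":null,"md_interleave":null,"dif_type":null}]'

# Same fingerprint filter as bdev_raid.sh@192; nulls join as empty strings.
cmp_base_bdev=$(echo "$bdev" | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')

# Three trailing spaces survive the join, so the comparison keeps them.
[[ $cmp_base_bdev == '512   ' ]] && echo match
```

Quoting the variable in the `[[ ... ]]` comparison (or escaping the spaces, as the xtrace shows) is what keeps those trailing spaces significant.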
00:11:44.780 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:44.780 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.780 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:44.780 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.781 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.781 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.781 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.781 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.781 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.781 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.781 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.781 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:44.781 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.781 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.781 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.781 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.781 14:37:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.781 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.781 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.781 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:44.781 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.781 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.781 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.781 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.781 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.781 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:44.781 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.781 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.040 [2024-11-04 14:37:43.903860] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:45.040 [2024-11-04 14:37:43.903897] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:45.040 [2024-11-04 14:37:43.904025] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:45.040 [2024-11-04 14:37:43.904100] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:45.040 [2024-11-04 14:37:43.904121] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:11:45.040 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.040 14:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66260 00:11:45.040 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 66260 ']' 00:11:45.040 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 66260 00:11:45.040 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:11:45.040 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:45.040 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66260 00:11:45.040 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:45.040 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:45.040 killing process with pid 66260 00:11:45.040 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66260' 00:11:45.040 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 66260 00:11:45.040 [2024-11-04 14:37:43.941345] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:45.040 14:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 66260 00:11:45.298 [2024-11-04 14:37:44.212079] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:46.243 14:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:46.243 00:11:46.243 real 0m11.700s 00:11:46.243 user 0m19.468s 00:11:46.243 sys 0m1.555s 00:11:46.243 14:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:11:46.243 14:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.243 ************************************ 00:11:46.243 END TEST raid_state_function_test_sb 00:11:46.243 ************************************ 00:11:46.243 14:37:45 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:11:46.243 14:37:45 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:46.243 14:37:45 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:46.243 14:37:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:46.243 ************************************ 00:11:46.243 START TEST raid_superblock_test 00:11:46.243 ************************************ 00:11:46.243 14:37:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 3 00:11:46.243 14:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:46.243 14:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:46.243 14:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:46.243 14:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:46.243 14:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:46.243 14:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:46.243 14:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:46.243 14:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:46.243 14:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:46.243 14:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:46.243 14:37:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:46.243 14:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:46.243 14:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:46.243 14:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:46.243 14:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:46.243 14:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:46.243 14:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66887 00:11:46.243 14:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66887 00:11:46.243 14:37:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:46.243 14:37:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 66887 ']' 00:11:46.243 14:37:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.243 14:37:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:46.243 14:37:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.243 14:37:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:46.243 14:37:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.501 [2024-11-04 14:37:45.381214] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:11:46.501 [2024-11-04 14:37:45.381376] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66887 ] 00:11:46.501 [2024-11-04 14:37:45.554061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.759 [2024-11-04 14:37:45.682660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.017 [2024-11-04 14:37:45.884317] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:47.017 [2024-11-04 14:37:45.884387] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:47.275 14:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:47.275 14:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:11:47.275 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:47.275 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:47.276 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:47.276 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:47.276 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:47.276 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:47.276 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:47.276 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:47.276 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:47.276 
14:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.276 14:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.536 malloc1 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.536 [2024-11-04 14:37:46.435059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:47.536 [2024-11-04 14:37:46.435141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.536 [2024-11-04 14:37:46.435176] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:47.536 [2024-11-04 14:37:46.435193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.536 [2024-11-04 14:37:46.437979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.536 [2024-11-04 14:37:46.438024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:47.536 pt1 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.536 malloc2 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.536 [2024-11-04 14:37:46.490684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:47.536 [2024-11-04 14:37:46.490752] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.536 [2024-11-04 14:37:46.490784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:47.536 [2024-11-04 14:37:46.490798] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.536 [2024-11-04 14:37:46.493499] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.536 [2024-11-04 14:37:46.493546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:47.536 
pt2 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.536 malloc3 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.536 [2024-11-04 14:37:46.556373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:47.536 [2024-11-04 14:37:46.556444] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.536 [2024-11-04 14:37:46.556479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:47.536 [2024-11-04 14:37:46.556494] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.536 [2024-11-04 14:37:46.559285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.536 [2024-11-04 14:37:46.559330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:47.536 pt3 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.536 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:47.537 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:47.537 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:11:47.537 14:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.537 14:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.537 [2024-11-04 14:37:46.564430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:47.537 [2024-11-04 14:37:46.566848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:47.537 [2024-11-04 14:37:46.566963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:47.537 [2024-11-04 14:37:46.567183] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:47.537 [2024-11-04 14:37:46.567211] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:47.537 [2024-11-04 14:37:46.567540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:11:47.537 [2024-11-04 14:37:46.567757] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:47.537 [2024-11-04 14:37:46.567774] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:47.537 [2024-11-04 14:37:46.567987] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.537 14:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.537 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:47.537 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.537 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.537 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:47.537 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.537 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:47.537 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.537 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.537 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.537 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.537 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.537 14:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.537 14:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.537 14:37:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.537 14:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.537 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.537 "name": "raid_bdev1", 00:11:47.537 "uuid": "7ad044db-7fd4-4af6-9cad-c82dceff3966", 00:11:47.537 "strip_size_kb": 64, 00:11:47.537 "state": "online", 00:11:47.537 "raid_level": "concat", 00:11:47.537 "superblock": true, 00:11:47.537 "num_base_bdevs": 3, 00:11:47.537 "num_base_bdevs_discovered": 3, 00:11:47.537 "num_base_bdevs_operational": 3, 00:11:47.537 "base_bdevs_list": [ 00:11:47.537 { 00:11:47.537 "name": "pt1", 00:11:47.537 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:47.537 "is_configured": true, 00:11:47.537 "data_offset": 2048, 00:11:47.537 "data_size": 63488 00:11:47.537 }, 00:11:47.537 { 00:11:47.537 "name": "pt2", 00:11:47.537 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:47.537 "is_configured": true, 00:11:47.537 "data_offset": 2048, 00:11:47.537 "data_size": 63488 00:11:47.537 }, 00:11:47.537 { 00:11:47.537 "name": "pt3", 00:11:47.537 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:47.537 "is_configured": true, 00:11:47.537 "data_offset": 2048, 00:11:47.537 "data_size": 63488 00:11:47.537 } 00:11:47.537 ] 00:11:47.537 }' 00:11:47.537 14:37:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.537 14:37:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.135 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:48.135 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:48.135 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:48.135 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:11:48.135 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:48.135 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:48.136 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:48.136 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:48.136 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.136 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.136 [2024-11-04 14:37:47.076882] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:48.136 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.136 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:48.136 "name": "raid_bdev1", 00:11:48.136 "aliases": [ 00:11:48.136 "7ad044db-7fd4-4af6-9cad-c82dceff3966" 00:11:48.136 ], 00:11:48.136 "product_name": "Raid Volume", 00:11:48.136 "block_size": 512, 00:11:48.136 "num_blocks": 190464, 00:11:48.136 "uuid": "7ad044db-7fd4-4af6-9cad-c82dceff3966", 00:11:48.136 "assigned_rate_limits": { 00:11:48.136 "rw_ios_per_sec": 0, 00:11:48.136 "rw_mbytes_per_sec": 0, 00:11:48.136 "r_mbytes_per_sec": 0, 00:11:48.136 "w_mbytes_per_sec": 0 00:11:48.136 }, 00:11:48.136 "claimed": false, 00:11:48.136 "zoned": false, 00:11:48.136 "supported_io_types": { 00:11:48.136 "read": true, 00:11:48.136 "write": true, 00:11:48.136 "unmap": true, 00:11:48.136 "flush": true, 00:11:48.136 "reset": true, 00:11:48.136 "nvme_admin": false, 00:11:48.136 "nvme_io": false, 00:11:48.136 "nvme_io_md": false, 00:11:48.136 "write_zeroes": true, 00:11:48.136 "zcopy": false, 00:11:48.136 "get_zone_info": false, 00:11:48.136 "zone_management": false, 00:11:48.136 "zone_append": false, 00:11:48.136 "compare": 
false, 00:11:48.136 "compare_and_write": false, 00:11:48.136 "abort": false, 00:11:48.136 "seek_hole": false, 00:11:48.136 "seek_data": false, 00:11:48.136 "copy": false, 00:11:48.136 "nvme_iov_md": false 00:11:48.136 }, 00:11:48.136 "memory_domains": [ 00:11:48.136 { 00:11:48.136 "dma_device_id": "system", 00:11:48.136 "dma_device_type": 1 00:11:48.136 }, 00:11:48.136 { 00:11:48.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.136 "dma_device_type": 2 00:11:48.136 }, 00:11:48.136 { 00:11:48.136 "dma_device_id": "system", 00:11:48.136 "dma_device_type": 1 00:11:48.136 }, 00:11:48.136 { 00:11:48.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.136 "dma_device_type": 2 00:11:48.136 }, 00:11:48.136 { 00:11:48.136 "dma_device_id": "system", 00:11:48.136 "dma_device_type": 1 00:11:48.136 }, 00:11:48.136 { 00:11:48.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.136 "dma_device_type": 2 00:11:48.136 } 00:11:48.136 ], 00:11:48.136 "driver_specific": { 00:11:48.136 "raid": { 00:11:48.136 "uuid": "7ad044db-7fd4-4af6-9cad-c82dceff3966", 00:11:48.136 "strip_size_kb": 64, 00:11:48.136 "state": "online", 00:11:48.136 "raid_level": "concat", 00:11:48.136 "superblock": true, 00:11:48.136 "num_base_bdevs": 3, 00:11:48.136 "num_base_bdevs_discovered": 3, 00:11:48.136 "num_base_bdevs_operational": 3, 00:11:48.136 "base_bdevs_list": [ 00:11:48.136 { 00:11:48.136 "name": "pt1", 00:11:48.136 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:48.136 "is_configured": true, 00:11:48.136 "data_offset": 2048, 00:11:48.136 "data_size": 63488 00:11:48.136 }, 00:11:48.136 { 00:11:48.136 "name": "pt2", 00:11:48.136 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:48.136 "is_configured": true, 00:11:48.136 "data_offset": 2048, 00:11:48.136 "data_size": 63488 00:11:48.136 }, 00:11:48.136 { 00:11:48.136 "name": "pt3", 00:11:48.136 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:48.136 "is_configured": true, 00:11:48.136 "data_offset": 2048, 00:11:48.136 
"data_size": 63488 00:11:48.136 } 00:11:48.136 ] 00:11:48.136 } 00:11:48.136 } 00:11:48.136 }' 00:11:48.136 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:48.136 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:48.136 pt2 00:11:48.136 pt3' 00:11:48.136 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.136 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:48.136 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.136 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.136 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:48.136 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.136 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.136 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.396 14:37:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:48.396 [2024-11-04 14:37:47.384872] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:48.396 14:37:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7ad044db-7fd4-4af6-9cad-c82dceff3966 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7ad044db-7fd4-4af6-9cad-c82dceff3966 ']' 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.396 [2024-11-04 14:37:47.436562] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:48.396 [2024-11-04 14:37:47.436594] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:48.396 [2024-11-04 14:37:47.436692] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.396 [2024-11-04 14:37:47.436771] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.396 [2024-11-04 14:37:47.436786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.396 14:37:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.396 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:48.655 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.655 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:48.655 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:48.655 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:48.655 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:48.655 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.656 [2024-11-04 14:37:47.572699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:48.656 [2024-11-04 14:37:47.575184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc2 is claimed 00:11:48.656 [2024-11-04 14:37:47.575263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:48.656 [2024-11-04 14:37:47.575335] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:48.656 [2024-11-04 14:37:47.575408] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:48.656 [2024-11-04 14:37:47.575442] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:48.656 [2024-11-04 14:37:47.575468] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:48.656 [2024-11-04 14:37:47.575481] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:48.656 request: 00:11:48.656 { 00:11:48.656 "name": "raid_bdev1", 00:11:48.656 "raid_level": "concat", 00:11:48.656 "base_bdevs": [ 00:11:48.656 "malloc1", 00:11:48.656 "malloc2", 00:11:48.656 "malloc3" 00:11:48.656 ], 00:11:48.656 "strip_size_kb": 64, 00:11:48.656 "superblock": false, 00:11:48.656 "method": "bdev_raid_create", 00:11:48.656 "req_id": 1 00:11:48.656 } 00:11:48.656 Got JSON-RPC error response 00:11:48.656 response: 00:11:48.656 { 00:11:48.656 "code": -17, 00:11:48.656 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:48.656 } 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es 
== 0 )) 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.656 [2024-11-04 14:37:47.640641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:48.656 [2024-11-04 14:37:47.640714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.656 [2024-11-04 14:37:47.640746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:48.656 [2024-11-04 14:37:47.640761] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.656 [2024-11-04 14:37:47.643690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.656 [2024-11-04 14:37:47.643735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:48.656 [2024-11-04 14:37:47.643844] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:48.656 [2024-11-04 14:37:47.643920] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:48.656 pt1 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.656 "name": "raid_bdev1", 
00:11:48.656 "uuid": "7ad044db-7fd4-4af6-9cad-c82dceff3966", 00:11:48.656 "strip_size_kb": 64, 00:11:48.656 "state": "configuring", 00:11:48.656 "raid_level": "concat", 00:11:48.656 "superblock": true, 00:11:48.656 "num_base_bdevs": 3, 00:11:48.656 "num_base_bdevs_discovered": 1, 00:11:48.656 "num_base_bdevs_operational": 3, 00:11:48.656 "base_bdevs_list": [ 00:11:48.656 { 00:11:48.656 "name": "pt1", 00:11:48.656 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:48.656 "is_configured": true, 00:11:48.656 "data_offset": 2048, 00:11:48.656 "data_size": 63488 00:11:48.656 }, 00:11:48.656 { 00:11:48.656 "name": null, 00:11:48.656 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:48.656 "is_configured": false, 00:11:48.656 "data_offset": 2048, 00:11:48.656 "data_size": 63488 00:11:48.656 }, 00:11:48.656 { 00:11:48.656 "name": null, 00:11:48.656 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:48.656 "is_configured": false, 00:11:48.656 "data_offset": 2048, 00:11:48.656 "data_size": 63488 00:11:48.656 } 00:11:48.656 ] 00:11:48.656 }' 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.656 14:37:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.223 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:49.223 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:49.223 14:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.223 14:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.223 [2024-11-04 14:37:48.152792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:49.223 [2024-11-04 14:37:48.152872] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.223 [2024-11-04 14:37:48.152906] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:49.223 [2024-11-04 14:37:48.152922] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.223 [2024-11-04 14:37:48.153501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.223 [2024-11-04 14:37:48.153532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:49.223 [2024-11-04 14:37:48.153640] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:49.223 [2024-11-04 14:37:48.153673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:49.223 pt2 00:11:49.223 14:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.223 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:49.223 14:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.223 14:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.223 [2024-11-04 14:37:48.160798] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:49.223 14:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.223 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:49.223 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:49.223 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.223 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:49.223 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:49.223 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:11:49.223 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.223 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.223 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.223 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.223 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.223 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.223 14:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.223 14:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.223 14:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.223 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.223 "name": "raid_bdev1", 00:11:49.223 "uuid": "7ad044db-7fd4-4af6-9cad-c82dceff3966", 00:11:49.223 "strip_size_kb": 64, 00:11:49.223 "state": "configuring", 00:11:49.223 "raid_level": "concat", 00:11:49.223 "superblock": true, 00:11:49.223 "num_base_bdevs": 3, 00:11:49.223 "num_base_bdevs_discovered": 1, 00:11:49.223 "num_base_bdevs_operational": 3, 00:11:49.223 "base_bdevs_list": [ 00:11:49.223 { 00:11:49.223 "name": "pt1", 00:11:49.223 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:49.223 "is_configured": true, 00:11:49.223 "data_offset": 2048, 00:11:49.223 "data_size": 63488 00:11:49.223 }, 00:11:49.223 { 00:11:49.223 "name": null, 00:11:49.223 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:49.223 "is_configured": false, 00:11:49.223 "data_offset": 0, 00:11:49.223 "data_size": 63488 00:11:49.223 }, 00:11:49.223 { 00:11:49.223 "name": null, 00:11:49.224 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:49.224 "is_configured": false, 00:11:49.224 "data_offset": 2048, 00:11:49.224 "data_size": 63488 00:11:49.224 } 00:11:49.224 ] 00:11:49.224 }' 00:11:49.224 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.224 14:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.789 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:49.789 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:49.789 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:49.789 14:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.789 14:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.789 [2024-11-04 14:37:48.680887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:49.789 [2024-11-04 14:37:48.680977] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.789 [2024-11-04 14:37:48.681005] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:49.789 [2024-11-04 14:37:48.681024] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.789 [2024-11-04 14:37:48.681602] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.789 [2024-11-04 14:37:48.681641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:49.789 [2024-11-04 14:37:48.681742] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:49.789 [2024-11-04 14:37:48.681790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:49.789 pt2 00:11:49.789 14:37:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.789 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:49.789 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:49.789 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:49.789 14:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.789 14:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.789 [2024-11-04 14:37:48.688863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:49.789 [2024-11-04 14:37:48.688917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.789 [2024-11-04 14:37:48.688951] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:49.789 [2024-11-04 14:37:48.688968] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.789 [2024-11-04 14:37:48.689417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.789 [2024-11-04 14:37:48.689457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:49.789 [2024-11-04 14:37:48.689535] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:49.789 [2024-11-04 14:37:48.689568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:49.789 [2024-11-04 14:37:48.689726] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:49.789 [2024-11-04 14:37:48.689752] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:49.789 [2024-11-04 14:37:48.690086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:11:49.789 [2024-11-04 14:37:48.690268] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:49.789 [2024-11-04 14:37:48.690293] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:49.789 [2024-11-04 14:37:48.690465] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:49.789 pt3 00:11:49.789 14:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.789 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:49.789 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:49.789 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:49.789 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:49.789 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:49.789 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:49.789 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:49.789 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:49.789 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.789 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.789 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.790 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.790 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.790 14:37:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.790 14:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.790 14:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.790 14:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.790 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.790 "name": "raid_bdev1", 00:11:49.790 "uuid": "7ad044db-7fd4-4af6-9cad-c82dceff3966", 00:11:49.790 "strip_size_kb": 64, 00:11:49.790 "state": "online", 00:11:49.790 "raid_level": "concat", 00:11:49.790 "superblock": true, 00:11:49.790 "num_base_bdevs": 3, 00:11:49.790 "num_base_bdevs_discovered": 3, 00:11:49.790 "num_base_bdevs_operational": 3, 00:11:49.790 "base_bdevs_list": [ 00:11:49.790 { 00:11:49.790 "name": "pt1", 00:11:49.790 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:49.790 "is_configured": true, 00:11:49.790 "data_offset": 2048, 00:11:49.790 "data_size": 63488 00:11:49.790 }, 00:11:49.790 { 00:11:49.790 "name": "pt2", 00:11:49.790 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:49.790 "is_configured": true, 00:11:49.790 "data_offset": 2048, 00:11:49.790 "data_size": 63488 00:11:49.790 }, 00:11:49.790 { 00:11:49.790 "name": "pt3", 00:11:49.790 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:49.790 "is_configured": true, 00:11:49.790 "data_offset": 2048, 00:11:49.790 "data_size": 63488 00:11:49.790 } 00:11:49.790 ] 00:11:49.790 }' 00:11:49.790 14:37:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.790 14:37:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.356 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:50.356 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:11:50.356 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:50.356 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:50.356 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:50.356 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:50.356 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:50.356 14:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.356 14:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.356 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:50.356 [2024-11-04 14:37:49.205415] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:50.356 14:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.356 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:50.356 "name": "raid_bdev1", 00:11:50.356 "aliases": [ 00:11:50.356 "7ad044db-7fd4-4af6-9cad-c82dceff3966" 00:11:50.356 ], 00:11:50.356 "product_name": "Raid Volume", 00:11:50.356 "block_size": 512, 00:11:50.356 "num_blocks": 190464, 00:11:50.356 "uuid": "7ad044db-7fd4-4af6-9cad-c82dceff3966", 00:11:50.356 "assigned_rate_limits": { 00:11:50.356 "rw_ios_per_sec": 0, 00:11:50.356 "rw_mbytes_per_sec": 0, 00:11:50.356 "r_mbytes_per_sec": 0, 00:11:50.356 "w_mbytes_per_sec": 0 00:11:50.356 }, 00:11:50.356 "claimed": false, 00:11:50.356 "zoned": false, 00:11:50.356 "supported_io_types": { 00:11:50.356 "read": true, 00:11:50.356 "write": true, 00:11:50.356 "unmap": true, 00:11:50.356 "flush": true, 00:11:50.356 "reset": true, 00:11:50.356 "nvme_admin": false, 00:11:50.356 "nvme_io": false, 
00:11:50.356 "nvme_io_md": false, 00:11:50.356 "write_zeroes": true, 00:11:50.356 "zcopy": false, 00:11:50.356 "get_zone_info": false, 00:11:50.356 "zone_management": false, 00:11:50.356 "zone_append": false, 00:11:50.356 "compare": false, 00:11:50.356 "compare_and_write": false, 00:11:50.356 "abort": false, 00:11:50.356 "seek_hole": false, 00:11:50.356 "seek_data": false, 00:11:50.356 "copy": false, 00:11:50.356 "nvme_iov_md": false 00:11:50.356 }, 00:11:50.356 "memory_domains": [ 00:11:50.356 { 00:11:50.356 "dma_device_id": "system", 00:11:50.356 "dma_device_type": 1 00:11:50.356 }, 00:11:50.356 { 00:11:50.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.356 "dma_device_type": 2 00:11:50.356 }, 00:11:50.356 { 00:11:50.356 "dma_device_id": "system", 00:11:50.356 "dma_device_type": 1 00:11:50.356 }, 00:11:50.356 { 00:11:50.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.356 "dma_device_type": 2 00:11:50.356 }, 00:11:50.356 { 00:11:50.356 "dma_device_id": "system", 00:11:50.356 "dma_device_type": 1 00:11:50.356 }, 00:11:50.356 { 00:11:50.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.356 "dma_device_type": 2 00:11:50.356 } 00:11:50.356 ], 00:11:50.356 "driver_specific": { 00:11:50.356 "raid": { 00:11:50.356 "uuid": "7ad044db-7fd4-4af6-9cad-c82dceff3966", 00:11:50.356 "strip_size_kb": 64, 00:11:50.356 "state": "online", 00:11:50.356 "raid_level": "concat", 00:11:50.356 "superblock": true, 00:11:50.356 "num_base_bdevs": 3, 00:11:50.356 "num_base_bdevs_discovered": 3, 00:11:50.356 "num_base_bdevs_operational": 3, 00:11:50.356 "base_bdevs_list": [ 00:11:50.356 { 00:11:50.356 "name": "pt1", 00:11:50.356 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:50.356 "is_configured": true, 00:11:50.356 "data_offset": 2048, 00:11:50.356 "data_size": 63488 00:11:50.356 }, 00:11:50.356 { 00:11:50.356 "name": "pt2", 00:11:50.356 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:50.356 "is_configured": true, 00:11:50.356 "data_offset": 2048, 00:11:50.356 
"data_size": 63488 00:11:50.356 }, 00:11:50.356 { 00:11:50.356 "name": "pt3", 00:11:50.356 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:50.356 "is_configured": true, 00:11:50.356 "data_offset": 2048, 00:11:50.356 "data_size": 63488 00:11:50.356 } 00:11:50.356 ] 00:11:50.356 } 00:11:50.356 } 00:11:50.356 }' 00:11:50.357 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:50.357 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:50.357 pt2 00:11:50.357 pt3' 00:11:50.357 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.357 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:50.357 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.357 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:50.357 14:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.357 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.357 14:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.357 14:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.357 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.357 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.357 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.357 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.357 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:50.357 14:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.357 14:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.357 14:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.357 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.357 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.357 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.357 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.357 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:50.357 14:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.357 14:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.357 14:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.357 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.357 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.616 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:50.616 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:50.616 14:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.616 14:37:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:50.616 [2024-11-04 14:37:49.485436] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:50.616 14:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.616 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7ad044db-7fd4-4af6-9cad-c82dceff3966 '!=' 7ad044db-7fd4-4af6-9cad-c82dceff3966 ']' 00:11:50.616 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:50.616 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:50.616 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:50.616 14:37:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66887 00:11:50.616 14:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 66887 ']' 00:11:50.616 14:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 66887 00:11:50.616 14:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:11:50.616 14:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:50.616 14:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66887 00:11:50.616 14:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:50.616 14:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:50.616 killing process with pid 66887 00:11:50.616 14:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66887' 00:11:50.616 14:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 66887 00:11:50.616 [2024-11-04 14:37:49.546026] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:11:50.616 14:37:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 66887 00:11:50.616 [2024-11-04 14:37:49.546141] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:50.616 [2024-11-04 14:37:49.546216] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:50.616 [2024-11-04 14:37:49.546245] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:50.874 [2024-11-04 14:37:49.813284] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:51.808 14:37:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:51.808 00:11:51.808 real 0m5.531s 00:11:51.808 user 0m8.361s 00:11:51.808 sys 0m0.785s 00:11:51.808 14:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:51.808 ************************************ 00:11:51.808 14:37:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.808 END TEST raid_superblock_test 00:11:51.808 ************************************ 00:11:51.808 14:37:50 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:11:51.808 14:37:50 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:51.808 14:37:50 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:51.808 14:37:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:51.808 ************************************ 00:11:51.808 START TEST raid_read_error_test 00:11:51.808 ************************************ 00:11:51.808 14:37:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 read 00:11:51.808 14:37:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:51.808 14:37:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:11:51.808 14:37:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:51.808 14:37:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:51.808 14:37:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:51.808 14:37:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:51.808 14:37:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:51.808 14:37:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:51.808 14:37:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:51.808 14:37:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:51.808 14:37:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:51.808 14:37:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:51.808 14:37:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:51.808 14:37:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:51.808 14:37:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:51.808 14:37:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:51.808 14:37:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:51.808 14:37:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:51.808 14:37:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:51.808 14:37:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:51.808 14:37:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:51.808 14:37:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:51.808 14:37:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:51.808 14:37:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:51.808 14:37:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:51.808 14:37:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kcSDItcC5r 00:11:51.808 14:37:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67151 00:11:51.809 14:37:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:51.809 14:37:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67151 00:11:51.809 14:37:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 67151 ']' 00:11:51.809 14:37:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.809 14:37:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:51.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.809 14:37:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.809 14:37:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:51.809 14:37:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.068 [2024-11-04 14:37:50.993336] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:11:52.068 [2024-11-04 14:37:50.993491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67151 ] 00:11:52.068 [2024-11-04 14:37:51.161626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.327 [2024-11-04 14:37:51.288650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.585 [2024-11-04 14:37:51.489447] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:52.586 [2024-11-04 14:37:51.489527] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:53.153 14:37:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:53.153 14:37:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:53.153 14:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:53.153 14:37:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:53.153 14:37:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.153 14:37:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.153 BaseBdev1_malloc 00:11:53.153 14:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.153 14:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:53.153 14:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.153 14:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.153 true 00:11:53.153 14:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:53.153 14:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:53.153 14:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.153 14:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.153 [2024-11-04 14:37:52.032585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:53.153 [2024-11-04 14:37:52.032648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.153 [2024-11-04 14:37:52.032677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:53.153 [2024-11-04 14:37:52.032695] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.153 [2024-11-04 14:37:52.035519] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.153 [2024-11-04 14:37:52.035567] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:53.153 BaseBdev1 00:11:53.153 14:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.153 14:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:53.153 14:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:53.153 14:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.153 14:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.153 BaseBdev2_malloc 00:11:53.153 14:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.153 14:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:53.153 14:37:52 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.153 14:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.153 true 00:11:53.153 14:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.153 14:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:53.153 14:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.153 14:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.153 [2024-11-04 14:37:52.088425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:53.153 [2024-11-04 14:37:52.088494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.153 [2024-11-04 14:37:52.088522] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:53.153 [2024-11-04 14:37:52.088539] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.153 [2024-11-04 14:37:52.091491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.153 [2024-11-04 14:37:52.091540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:53.153 BaseBdev2 00:11:53.153 14:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.153 14:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:53.153 14:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:53.153 14:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.153 14:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.153 BaseBdev3_malloc 00:11:53.153 14:37:52 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.153 14:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:53.154 14:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.154 14:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.154 true 00:11:53.154 14:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.154 14:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:53.154 14:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.154 14:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.154 [2024-11-04 14:37:52.157527] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:53.154 [2024-11-04 14:37:52.157588] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.154 [2024-11-04 14:37:52.157614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:53.154 [2024-11-04 14:37:52.157631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.154 [2024-11-04 14:37:52.160480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.154 [2024-11-04 14:37:52.160543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:53.154 BaseBdev3 00:11:53.154 14:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.154 14:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:53.154 14:37:52 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.154 14:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.154 [2024-11-04 14:37:52.165620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:53.154 [2024-11-04 14:37:52.168099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:53.154 [2024-11-04 14:37:52.168216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:53.154 [2024-11-04 14:37:52.168528] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:53.154 [2024-11-04 14:37:52.168568] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:53.154 [2024-11-04 14:37:52.168997] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:53.154 [2024-11-04 14:37:52.169216] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:53.154 [2024-11-04 14:37:52.169239] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:53.154 [2024-11-04 14:37:52.169559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.154 14:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.154 14:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:53.154 14:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.154 14:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.154 14:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:53.154 14:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:53.154 14:37:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:53.154 14:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.154 14:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.154 14:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.154 14:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.154 14:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.154 14:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.154 14:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.154 14:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.154 14:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.154 14:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.154 "name": "raid_bdev1", 00:11:53.154 "uuid": "efeb0457-30a4-4c23-8fa4-e10baeaf7b02", 00:11:53.154 "strip_size_kb": 64, 00:11:53.154 "state": "online", 00:11:53.154 "raid_level": "concat", 00:11:53.154 "superblock": true, 00:11:53.154 "num_base_bdevs": 3, 00:11:53.154 "num_base_bdevs_discovered": 3, 00:11:53.154 "num_base_bdevs_operational": 3, 00:11:53.154 "base_bdevs_list": [ 00:11:53.154 { 00:11:53.154 "name": "BaseBdev1", 00:11:53.154 "uuid": "0449ded3-10ea-5435-a13f-de5493a6d2cf", 00:11:53.154 "is_configured": true, 00:11:53.154 "data_offset": 2048, 00:11:53.154 "data_size": 63488 00:11:53.154 }, 00:11:53.154 { 00:11:53.154 "name": "BaseBdev2", 00:11:53.154 "uuid": "88625935-3031-5f19-babd-5279de5f7861", 00:11:53.154 "is_configured": true, 00:11:53.154 "data_offset": 2048, 00:11:53.154 "data_size": 63488 
00:11:53.154 }, 00:11:53.154 { 00:11:53.154 "name": "BaseBdev3", 00:11:53.154 "uuid": "e627ea2b-702b-5c1f-97bd-7843ae7f1e96", 00:11:53.154 "is_configured": true, 00:11:53.154 "data_offset": 2048, 00:11:53.154 "data_size": 63488 00:11:53.154 } 00:11:53.154 ] 00:11:53.154 }' 00:11:53.154 14:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.154 14:37:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.720 14:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:53.720 14:37:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:53.720 [2024-11-04 14:37:52.811183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:54.679 14:37:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:54.679 14:37:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.679 14:37:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.679 14:37:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.679 14:37:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:54.679 14:37:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:54.679 14:37:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:54.679 14:37:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:54.679 14:37:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:54.679 14:37:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:11:54.679 14:37:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:54.679 14:37:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:54.679 14:37:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:54.679 14:37:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.679 14:37:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.679 14:37:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.679 14:37:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.679 14:37:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.679 14:37:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.679 14:37:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.679 14:37:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.679 14:37:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.679 14:37:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.679 "name": "raid_bdev1", 00:11:54.679 "uuid": "efeb0457-30a4-4c23-8fa4-e10baeaf7b02", 00:11:54.680 "strip_size_kb": 64, 00:11:54.680 "state": "online", 00:11:54.680 "raid_level": "concat", 00:11:54.680 "superblock": true, 00:11:54.680 "num_base_bdevs": 3, 00:11:54.680 "num_base_bdevs_discovered": 3, 00:11:54.680 "num_base_bdevs_operational": 3, 00:11:54.680 "base_bdevs_list": [ 00:11:54.680 { 00:11:54.680 "name": "BaseBdev1", 00:11:54.680 "uuid": "0449ded3-10ea-5435-a13f-de5493a6d2cf", 00:11:54.680 "is_configured": true, 00:11:54.680 "data_offset": 2048, 00:11:54.680 "data_size": 63488 
00:11:54.680 }, 00:11:54.680 { 00:11:54.680 "name": "BaseBdev2", 00:11:54.680 "uuid": "88625935-3031-5f19-babd-5279de5f7861", 00:11:54.680 "is_configured": true, 00:11:54.680 "data_offset": 2048, 00:11:54.680 "data_size": 63488 00:11:54.680 }, 00:11:54.680 { 00:11:54.680 "name": "BaseBdev3", 00:11:54.680 "uuid": "e627ea2b-702b-5c1f-97bd-7843ae7f1e96", 00:11:54.680 "is_configured": true, 00:11:54.680 "data_offset": 2048, 00:11:54.680 "data_size": 63488 00:11:54.680 } 00:11:54.680 ] 00:11:54.680 }' 00:11:54.680 14:37:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.680 14:37:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.246 14:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:55.246 14:37:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.246 14:37:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.246 [2024-11-04 14:37:54.267158] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:55.246 [2024-11-04 14:37:54.267201] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:55.246 [2024-11-04 14:37:54.270577] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:55.246 [2024-11-04 14:37:54.270641] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:55.246 [2024-11-04 14:37:54.270696] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:55.246 [2024-11-04 14:37:54.270714] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:55.246 { 00:11:55.246 "results": [ 00:11:55.246 { 00:11:55.246 "job": "raid_bdev1", 00:11:55.246 "core_mask": "0x1", 00:11:55.246 "workload": "randrw", 00:11:55.246 "percentage": 50, 
00:11:55.246 "status": "finished", 00:11:55.246 "queue_depth": 1, 00:11:55.246 "io_size": 131072, 00:11:55.246 "runtime": 1.453536, 00:11:55.246 "iops": 10678.78607753781, 00:11:55.246 "mibps": 1334.8482596922263, 00:11:55.246 "io_failed": 1, 00:11:55.246 "io_timeout": 0, 00:11:55.246 "avg_latency_us": 130.7176227650466, 00:11:55.246 "min_latency_us": 39.56363636363636, 00:11:55.246 "max_latency_us": 1854.370909090909 00:11:55.246 } 00:11:55.246 ], 00:11:55.246 "core_count": 1 00:11:55.246 } 00:11:55.246 14:37:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.246 14:37:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67151 00:11:55.247 14:37:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 67151 ']' 00:11:55.247 14:37:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 67151 00:11:55.247 14:37:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:11:55.247 14:37:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:55.247 14:37:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67151 00:11:55.247 14:37:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:55.247 14:37:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:55.247 killing process with pid 67151 00:11:55.247 14:37:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67151' 00:11:55.247 14:37:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 67151 00:11:55.247 14:37:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 67151 00:11:55.247 [2024-11-04 14:37:54.313459] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:55.536 [2024-11-04 
14:37:54.520050] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:56.912 14:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kcSDItcC5r 00:11:56.912 14:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:56.912 14:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:56.912 14:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:11:56.912 14:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:56.912 14:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:56.912 14:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:56.912 14:37:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:11:56.912 00:11:56.912 real 0m4.734s 00:11:56.912 user 0m5.948s 00:11:56.912 sys 0m0.569s 00:11:56.912 14:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:56.912 14:37:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.912 ************************************ 00:11:56.912 END TEST raid_read_error_test 00:11:56.912 ************************************ 00:11:56.912 14:37:55 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:11:56.912 14:37:55 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:56.912 14:37:55 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:56.912 14:37:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:56.912 ************************************ 00:11:56.912 START TEST raid_write_error_test 00:11:56.912 ************************************ 00:11:56.912 14:37:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 write 00:11:56.912 14:37:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:56.912 14:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:56.912 14:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:56.912 14:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:56.912 14:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.912 14:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:56.912 14:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:56.912 14:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.912 14:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:56.912 14:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:56.912 14:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.912 14:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:56.912 14:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:56.912 14:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.912 14:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:56.913 14:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:56.913 14:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:56.913 14:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:56.913 14:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:56.913 14:37:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:56.913 14:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:56.913 14:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:56.913 14:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:56.913 14:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:56.913 14:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:56.913 14:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.V5bk2KIlD3 00:11:56.913 14:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67292 00:11:56.913 14:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67292 00:11:56.913 14:37:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 67292 ']' 00:11:56.913 14:37:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:56.913 14:37:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.913 14:37:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:56.913 14:37:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:56.913 14:37:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:56.913 14:37:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.913 [2024-11-04 14:37:55.775328] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:11:56.913 [2024-11-04 14:37:55.775499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67292 ] 00:11:56.913 [2024-11-04 14:37:55.962978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.170 [2024-11-04 14:37:56.089311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.452 [2024-11-04 14:37:56.294125] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:57.452 [2024-11-04 14:37:56.294166] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:57.711 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:57.711 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:57.711 14:37:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:57.711 14:37:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:57.711 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.711 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.711 BaseBdev1_malloc 00:11:57.711 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.711 14:37:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:57.711 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.711 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.711 true 00:11:57.711 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.711 14:37:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:57.711 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.711 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.711 [2024-11-04 14:37:56.778111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:57.711 [2024-11-04 14:37:56.778368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.711 [2024-11-04 14:37:56.778427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:57.711 [2024-11-04 14:37:56.778480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.711 [2024-11-04 14:37:56.781498] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.711 [2024-11-04 14:37:56.781714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:57.711 BaseBdev1 00:11:57.711 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.711 14:37:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:57.711 14:37:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:57.711 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.711 14:37:56 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:57.711 BaseBdev2_malloc 00:11:57.711 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.711 14:37:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:57.711 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.711 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.711 true 00:11:57.711 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.711 14:37:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:57.711 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.711 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.969 [2024-11-04 14:37:56.837045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:57.969 [2024-11-04 14:37:56.837153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.969 [2024-11-04 14:37:56.837194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:57.969 [2024-11-04 14:37:56.837223] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.969 [2024-11-04 14:37:56.840306] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.969 [2024-11-04 14:37:56.840368] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:57.969 BaseBdev2 00:11:57.969 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.969 14:37:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:57.969 14:37:56 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:57.969 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.969 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.969 BaseBdev3_malloc 00:11:57.969 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.969 14:37:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:57.969 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.969 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.969 true 00:11:57.969 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.969 14:37:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:57.969 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.969 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.969 [2024-11-04 14:37:56.914899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:57.969 [2024-11-04 14:37:56.915005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.969 [2024-11-04 14:37:56.915046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:57.969 [2024-11-04 14:37:56.915080] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.969 [2024-11-04 14:37:56.918020] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.969 [2024-11-04 14:37:56.918070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:57.969 BaseBdev3 00:11:57.969 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.970 14:37:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:57.970 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.970 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.970 [2024-11-04 14:37:56.923020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:57.970 [2024-11-04 14:37:56.925484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:57.970 [2024-11-04 14:37:56.925605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:57.970 [2024-11-04 14:37:56.925896] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:57.970 [2024-11-04 14:37:56.925914] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:57.970 [2024-11-04 14:37:56.926248] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:57.970 [2024-11-04 14:37:56.926504] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:57.970 [2024-11-04 14:37:56.926531] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:57.970 [2024-11-04 14:37:56.926704] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.970 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.970 14:37:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:57.970 14:37:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.970 14:37:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.970 14:37:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:57.970 14:37:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.970 14:37:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:57.970 14:37:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.970 14:37:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.970 14:37:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.970 14:37:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.970 14:37:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.970 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.970 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.970 14:37:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.970 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.970 14:37:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.970 "name": "raid_bdev1", 00:11:57.970 "uuid": "e9fe35ca-2928-43ce-8fae-b2149cd64011", 00:11:57.970 "strip_size_kb": 64, 00:11:57.970 "state": "online", 00:11:57.970 "raid_level": "concat", 00:11:57.970 "superblock": true, 00:11:57.970 "num_base_bdevs": 3, 00:11:57.970 "num_base_bdevs_discovered": 3, 00:11:57.970 "num_base_bdevs_operational": 3, 00:11:57.970 "base_bdevs_list": [ 00:11:57.970 { 00:11:57.970 
"name": "BaseBdev1", 00:11:57.970 "uuid": "5e85e192-39ba-5913-bf20-d0d1b4cc0d81", 00:11:57.970 "is_configured": true, 00:11:57.970 "data_offset": 2048, 00:11:57.970 "data_size": 63488 00:11:57.970 }, 00:11:57.970 { 00:11:57.970 "name": "BaseBdev2", 00:11:57.970 "uuid": "1bedd50f-3f6e-5c28-b89b-639336d8303d", 00:11:57.970 "is_configured": true, 00:11:57.970 "data_offset": 2048, 00:11:57.970 "data_size": 63488 00:11:57.970 }, 00:11:57.970 { 00:11:57.970 "name": "BaseBdev3", 00:11:57.970 "uuid": "fbcd1a83-e3b5-5cf2-8c83-b77a832b7115", 00:11:57.970 "is_configured": true, 00:11:57.970 "data_offset": 2048, 00:11:57.970 "data_size": 63488 00:11:57.970 } 00:11:57.970 ] 00:11:57.970 }' 00:11:57.970 14:37:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.970 14:37:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.536 14:37:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:58.536 14:37:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:58.536 [2024-11-04 14:37:57.568779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:59.470 14:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:59.470 14:37:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.470 14:37:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.470 14:37:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.470 14:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:59.470 14:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:59.470 14:37:58 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:59.470 14:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:59.470 14:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.470 14:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.470 14:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:59.470 14:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.470 14:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:59.470 14:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.470 14:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.470 14:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.470 14:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.470 14:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.470 14:37:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.470 14:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.470 14:37:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.470 14:37:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.470 14:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.470 "name": "raid_bdev1", 00:11:59.470 "uuid": "e9fe35ca-2928-43ce-8fae-b2149cd64011", 00:11:59.470 "strip_size_kb": 64, 00:11:59.470 "state": "online", 
00:11:59.470 "raid_level": "concat", 00:11:59.470 "superblock": true, 00:11:59.470 "num_base_bdevs": 3, 00:11:59.470 "num_base_bdevs_discovered": 3, 00:11:59.470 "num_base_bdevs_operational": 3, 00:11:59.470 "base_bdevs_list": [ 00:11:59.470 { 00:11:59.470 "name": "BaseBdev1", 00:11:59.470 "uuid": "5e85e192-39ba-5913-bf20-d0d1b4cc0d81", 00:11:59.470 "is_configured": true, 00:11:59.470 "data_offset": 2048, 00:11:59.470 "data_size": 63488 00:11:59.470 }, 00:11:59.470 { 00:11:59.470 "name": "BaseBdev2", 00:11:59.470 "uuid": "1bedd50f-3f6e-5c28-b89b-639336d8303d", 00:11:59.470 "is_configured": true, 00:11:59.470 "data_offset": 2048, 00:11:59.470 "data_size": 63488 00:11:59.470 }, 00:11:59.470 { 00:11:59.470 "name": "BaseBdev3", 00:11:59.470 "uuid": "fbcd1a83-e3b5-5cf2-8c83-b77a832b7115", 00:11:59.470 "is_configured": true, 00:11:59.470 "data_offset": 2048, 00:11:59.470 "data_size": 63488 00:11:59.470 } 00:11:59.470 ] 00:11:59.470 }' 00:11:59.470 14:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.470 14:37:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.036 14:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:00.036 14:37:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.036 14:37:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.036 [2024-11-04 14:37:58.987752] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:00.036 [2024-11-04 14:37:58.987788] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:00.036 [2024-11-04 14:37:58.991360] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:00.036 [2024-11-04 14:37:58.991434] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.036 [2024-11-04 14:37:58.991485] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:00.036 [2024-11-04 14:37:58.991501] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:00.036 { 00:12:00.036 "results": [ 00:12:00.036 { 00:12:00.036 "job": "raid_bdev1", 00:12:00.036 "core_mask": "0x1", 00:12:00.036 "workload": "randrw", 00:12:00.036 "percentage": 50, 00:12:00.036 "status": "finished", 00:12:00.036 "queue_depth": 1, 00:12:00.036 "io_size": 131072, 00:12:00.036 "runtime": 1.416364, 00:12:00.036 "iops": 10901.85856178214, 00:12:00.036 "mibps": 1362.7323202227676, 00:12:00.036 "io_failed": 1, 00:12:00.036 "io_timeout": 0, 00:12:00.036 "avg_latency_us": 128.079793714898, 00:12:00.036 "min_latency_us": 37.70181818181818, 00:12:00.036 "max_latency_us": 1899.0545454545454 00:12:00.036 } 00:12:00.036 ], 00:12:00.036 "core_count": 1 00:12:00.036 } 00:12:00.036 14:37:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.036 14:37:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67292 00:12:00.036 14:37:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 67292 ']' 00:12:00.036 14:37:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 67292 00:12:00.036 14:37:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:12:00.036 14:37:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:00.036 14:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67292 00:12:00.036 14:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:00.036 killing process with pid 67292 00:12:00.036 14:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:00.036 14:37:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67292' 00:12:00.036 14:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 67292 00:12:00.036 [2024-11-04 14:37:59.024916] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:00.036 14:37:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 67292 00:12:00.294 [2024-11-04 14:37:59.229061] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:01.229 14:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.V5bk2KIlD3 00:12:01.229 14:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:01.229 14:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:01.229 14:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:12:01.229 14:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:01.229 14:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:01.229 14:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:01.229 14:38:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:12:01.229 00:12:01.229 real 0m4.656s 00:12:01.229 user 0m5.779s 00:12:01.229 sys 0m0.552s 00:12:01.229 14:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:01.229 14:38:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.229 ************************************ 00:12:01.229 END TEST raid_write_error_test 00:12:01.229 ************************************ 00:12:01.487 14:38:00 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:01.487 14:38:00 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:12:01.487 14:38:00 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:01.487 14:38:00 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:01.487 14:38:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:01.487 ************************************ 00:12:01.487 START TEST raid_state_function_test 00:12:01.487 ************************************ 00:12:01.487 14:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 false 00:12:01.487 14:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:01.487 14:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:01.487 14:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:01.488 14:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:01.488 14:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:01.488 14:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:01.488 14:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:01.488 14:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:01.488 14:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:01.488 14:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:01.488 14:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:01.488 14:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:01.488 14:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:01.488 14:38:00 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:01.488 14:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:01.488 14:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:01.488 14:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:01.488 14:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:01.488 14:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:01.488 14:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:01.488 14:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:01.488 14:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:01.488 14:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:01.488 14:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:01.488 14:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:01.488 14:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67436 00:12:01.488 Process raid pid: 67436 00:12:01.488 14:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67436' 00:12:01.488 14:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67436 00:12:01.488 14:38:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:01.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
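The trace above launches `bdev_svc` and then calls `waitforlisten 67436`, which blocks until the daemon is up and listening on `/var/tmp/spdk.sock`. A minimal sketch of that poll-until-ready pattern is below; the function name, retry count, and socket path are illustrative stand-ins, not SPDK's actual `autotest_common.sh` implementation:

```shell
#!/bin/sh
# Hypothetical sketch of a waitforlisten-style helper: poll until the target
# pid is still alive AND its UNIX domain RPC socket exists, then return 0.
# Returns 1 if the process dies or the socket never appears.
waitforsocket() {
    pid=$1
    sock=${2:-/var/tmp/spdk.sock}
    retries=100
    while [ "$retries" -gt 0 ]; do
        # give up immediately if the target process is gone
        kill -0 "$pid" 2>/dev/null || return 1
        # success once the UNIX domain socket shows up
        [ -S "$sock" ] && return 0
        retries=$((retries - 1))
        sleep 0.1
    done
    return 1
}
```

In the log, the RPC client (`rpc_cmd`) only starts issuing `bdev_raid_create` calls after this wait succeeds, which is why the "Waiting for process to start up..." banner always precedes the first RPC output.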
00:12:01.488 14:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 67436 ']' 00:12:01.488 14:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.488 14:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:01.488 14:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.488 14:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:01.488 14:38:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.488 [2024-11-04 14:38:00.481501] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:12:01.488 [2024-11-04 14:38:00.482030] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.746 [2024-11-04 14:38:00.666737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.746 [2024-11-04 14:38:00.798297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.003 [2024-11-04 14:38:01.006923] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:02.003 [2024-11-04 14:38:01.007163] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:02.570 14:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:02.570 14:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:12:02.570 14:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n 
Existed_Raid 00:12:02.570 14:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.570 14:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.570 [2024-11-04 14:38:01.500243] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:02.570 [2024-11-04 14:38:01.500355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:02.570 [2024-11-04 14:38:01.500372] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:02.570 [2024-11-04 14:38:01.500387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:02.570 [2024-11-04 14:38:01.500396] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:02.570 [2024-11-04 14:38:01.500409] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:02.570 14:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.570 14:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:02.570 14:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.570 14:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.570 14:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.570 14:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.570 14:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:02.570 14:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.570 14:38:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.570 14:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.570 14:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.570 14:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.570 14:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.570 14:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.570 14:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.570 14:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.570 14:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.570 "name": "Existed_Raid", 00:12:02.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.570 "strip_size_kb": 0, 00:12:02.570 "state": "configuring", 00:12:02.570 "raid_level": "raid1", 00:12:02.570 "superblock": false, 00:12:02.570 "num_base_bdevs": 3, 00:12:02.570 "num_base_bdevs_discovered": 0, 00:12:02.570 "num_base_bdevs_operational": 3, 00:12:02.570 "base_bdevs_list": [ 00:12:02.570 { 00:12:02.570 "name": "BaseBdev1", 00:12:02.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.570 "is_configured": false, 00:12:02.570 "data_offset": 0, 00:12:02.570 "data_size": 0 00:12:02.570 }, 00:12:02.570 { 00:12:02.570 "name": "BaseBdev2", 00:12:02.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.570 "is_configured": false, 00:12:02.570 "data_offset": 0, 00:12:02.570 "data_size": 0 00:12:02.570 }, 00:12:02.570 { 00:12:02.570 "name": "BaseBdev3", 00:12:02.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.570 "is_configured": false, 00:12:02.570 "data_offset": 0, 
00:12:02.570 "data_size": 0 00:12:02.570 } 00:12:02.570 ] 00:12:02.570 }' 00:12:02.570 14:38:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.570 14:38:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.137 [2024-11-04 14:38:02.024362] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:03.137 [2024-11-04 14:38:02.024406] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.137 [2024-11-04 14:38:02.036360] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:03.137 [2024-11-04 14:38:02.036444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:03.137 [2024-11-04 14:38:02.036459] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:03.137 [2024-11-04 14:38:02.036475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:03.137 [2024-11-04 14:38:02.036484] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:12:03.137 [2024-11-04 14:38:02.036497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.137 [2024-11-04 14:38:02.082353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:03.137 BaseBdev1 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.137 [ 00:12:03.137 { 00:12:03.137 "name": "BaseBdev1", 00:12:03.137 "aliases": [ 00:12:03.137 "40351c60-7b30-4e87-b2ef-0a9a14fa793e" 00:12:03.137 ], 00:12:03.137 "product_name": "Malloc disk", 00:12:03.137 "block_size": 512, 00:12:03.137 "num_blocks": 65536, 00:12:03.137 "uuid": "40351c60-7b30-4e87-b2ef-0a9a14fa793e", 00:12:03.137 "assigned_rate_limits": { 00:12:03.137 "rw_ios_per_sec": 0, 00:12:03.137 "rw_mbytes_per_sec": 0, 00:12:03.137 "r_mbytes_per_sec": 0, 00:12:03.137 "w_mbytes_per_sec": 0 00:12:03.137 }, 00:12:03.137 "claimed": true, 00:12:03.137 "claim_type": "exclusive_write", 00:12:03.137 "zoned": false, 00:12:03.137 "supported_io_types": { 00:12:03.137 "read": true, 00:12:03.137 "write": true, 00:12:03.137 "unmap": true, 00:12:03.137 "flush": true, 00:12:03.137 "reset": true, 00:12:03.137 "nvme_admin": false, 00:12:03.137 "nvme_io": false, 00:12:03.137 "nvme_io_md": false, 00:12:03.137 "write_zeroes": true, 00:12:03.137 "zcopy": true, 00:12:03.137 "get_zone_info": false, 00:12:03.137 "zone_management": false, 00:12:03.137 "zone_append": false, 00:12:03.137 "compare": false, 00:12:03.137 "compare_and_write": false, 00:12:03.137 "abort": true, 00:12:03.137 "seek_hole": false, 00:12:03.137 "seek_data": false, 00:12:03.137 "copy": true, 00:12:03.137 "nvme_iov_md": false 00:12:03.137 }, 00:12:03.137 "memory_domains": [ 00:12:03.137 { 00:12:03.137 "dma_device_id": "system", 00:12:03.137 "dma_device_type": 1 00:12:03.137 }, 00:12:03.137 { 00:12:03.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.137 "dma_device_type": 2 00:12:03.137 } 00:12:03.137 ], 00:12:03.137 "driver_specific": {} 00:12:03.137 } 
00:12:03.137 ] 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.137 14:38:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.137 "name": "Existed_Raid", 00:12:03.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.137 "strip_size_kb": 0, 00:12:03.137 "state": "configuring", 00:12:03.137 "raid_level": "raid1", 00:12:03.137 "superblock": false, 00:12:03.137 "num_base_bdevs": 3, 00:12:03.137 "num_base_bdevs_discovered": 1, 00:12:03.137 "num_base_bdevs_operational": 3, 00:12:03.137 "base_bdevs_list": [ 00:12:03.137 { 00:12:03.137 "name": "BaseBdev1", 00:12:03.137 "uuid": "40351c60-7b30-4e87-b2ef-0a9a14fa793e", 00:12:03.137 "is_configured": true, 00:12:03.137 "data_offset": 0, 00:12:03.137 "data_size": 65536 00:12:03.137 }, 00:12:03.137 { 00:12:03.137 "name": "BaseBdev2", 00:12:03.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.137 "is_configured": false, 00:12:03.137 "data_offset": 0, 00:12:03.137 "data_size": 0 00:12:03.137 }, 00:12:03.137 { 00:12:03.137 "name": "BaseBdev3", 00:12:03.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.137 "is_configured": false, 00:12:03.137 "data_offset": 0, 00:12:03.137 "data_size": 0 00:12:03.137 } 00:12:03.137 ] 00:12:03.137 }' 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.137 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.704 14:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:03.704 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.704 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.704 [2024-11-04 14:38:02.674580] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:03.704 [2024-11-04 14:38:02.674791] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 
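The `verify_raid_bdev_state` calls traced above all follow one pattern: fetch every raid bdev over RPC (`rpc_cmd bdev_raid_get_bdevs all`), filter to the bdev under test with `jq -r '.[] | select(.name == "Existed_Raid")'`, then compare individual fields against the expected values. A hedged sketch of that get-then-filter step follows; the inline JSON is a canned stand-in for live RPC output (trimmed to a few fields from the log), not a real `bdev_raid_get_bdevs` response:

```shell
#!/bin/sh
# Sketch of the select-and-compare pattern from verify_raid_bdev_state.
# The $json literal stands in for `rpc_cmd bdev_raid_get_bdevs all`;
# field names ("name", "state", "raid_level") mirror the log output above.
json='[{"name": "Existed_Raid", "state": "configuring", "raid_level": "raid1", "num_base_bdevs": 3}]'

# Pick out the one raid bdev we care about, as the test script does.
info=$(printf '%s' "$json" | jq -r '.[] | select(.name == "Existed_Raid")')

# Extract the fields that the test asserts on.
state=$(printf '%s' "$info" | jq -r '.state')
level=$(printf '%s' "$info" | jq -r '.raid_level')

# Fail loudly on a state mismatch, as verify_raid_bdev_state would.
[ "$state" = "configuring" ] || { echo "unexpected state: $state" >&2; exit 1; }
echo "$state $level"
```

The same filter also explains the repeated `jq -r '.[] | select(.name == "Existed_Raid")'` lines in the trace: each `verify_raid_bdev_state` invocation re-fetches the list and re-selects the bdev before checking `num_base_bdevs_discovered` and friends.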
00:12:03.704 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.704 14:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:03.704 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.704 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.704 [2024-11-04 14:38:02.686623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:03.704 [2024-11-04 14:38:02.689226] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:03.704 [2024-11-04 14:38:02.689460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:03.704 [2024-11-04 14:38:02.689612] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:03.704 [2024-11-04 14:38:02.689756] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:03.704 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.704 14:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:03.704 14:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:03.704 14:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:03.704 14:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.704 14:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.704 14:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.704 14:38:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.704 14:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:03.704 14:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.704 14:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.704 14:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.704 14:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.704 14:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.704 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.704 14:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.704 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.704 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.704 14:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.704 "name": "Existed_Raid", 00:12:03.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.704 "strip_size_kb": 0, 00:12:03.704 "state": "configuring", 00:12:03.705 "raid_level": "raid1", 00:12:03.705 "superblock": false, 00:12:03.705 "num_base_bdevs": 3, 00:12:03.705 "num_base_bdevs_discovered": 1, 00:12:03.705 "num_base_bdevs_operational": 3, 00:12:03.705 "base_bdevs_list": [ 00:12:03.705 { 00:12:03.705 "name": "BaseBdev1", 00:12:03.705 "uuid": "40351c60-7b30-4e87-b2ef-0a9a14fa793e", 00:12:03.705 "is_configured": true, 00:12:03.705 "data_offset": 0, 00:12:03.705 "data_size": 65536 00:12:03.705 }, 00:12:03.705 { 00:12:03.705 "name": "BaseBdev2", 00:12:03.705 
"uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.705 "is_configured": false, 00:12:03.705 "data_offset": 0, 00:12:03.705 "data_size": 0 00:12:03.705 }, 00:12:03.705 { 00:12:03.705 "name": "BaseBdev3", 00:12:03.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.705 "is_configured": false, 00:12:03.705 "data_offset": 0, 00:12:03.705 "data_size": 0 00:12:03.705 } 00:12:03.705 ] 00:12:03.705 }' 00:12:03.705 14:38:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.705 14:38:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.271 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:04.271 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.271 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.271 [2024-11-04 14:38:03.238089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:04.271 BaseBdev2 00:12:04.271 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.271 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:04.271 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:04.271 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:04.271 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:04.271 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:04.271 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:04.271 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_wait_for_examine 00:12:04.271 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.271 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.271 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.271 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:04.271 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.271 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.271 [ 00:12:04.271 { 00:12:04.271 "name": "BaseBdev2", 00:12:04.271 "aliases": [ 00:12:04.271 "8e8fcc88-1034-4098-9f27-c7ea89a8dab2" 00:12:04.271 ], 00:12:04.271 "product_name": "Malloc disk", 00:12:04.271 "block_size": 512, 00:12:04.271 "num_blocks": 65536, 00:12:04.271 "uuid": "8e8fcc88-1034-4098-9f27-c7ea89a8dab2", 00:12:04.271 "assigned_rate_limits": { 00:12:04.271 "rw_ios_per_sec": 0, 00:12:04.271 "rw_mbytes_per_sec": 0, 00:12:04.271 "r_mbytes_per_sec": 0, 00:12:04.271 "w_mbytes_per_sec": 0 00:12:04.271 }, 00:12:04.271 "claimed": true, 00:12:04.271 "claim_type": "exclusive_write", 00:12:04.271 "zoned": false, 00:12:04.271 "supported_io_types": { 00:12:04.271 "read": true, 00:12:04.271 "write": true, 00:12:04.271 "unmap": true, 00:12:04.271 "flush": true, 00:12:04.271 "reset": true, 00:12:04.271 "nvme_admin": false, 00:12:04.271 "nvme_io": false, 00:12:04.271 "nvme_io_md": false, 00:12:04.271 "write_zeroes": true, 00:12:04.271 "zcopy": true, 00:12:04.271 "get_zone_info": false, 00:12:04.271 "zone_management": false, 00:12:04.271 "zone_append": false, 00:12:04.271 "compare": false, 00:12:04.272 "compare_and_write": false, 00:12:04.272 "abort": true, 00:12:04.272 "seek_hole": false, 00:12:04.272 "seek_data": false, 00:12:04.272 "copy": true, 00:12:04.272 "nvme_iov_md": false 
00:12:04.272 }, 00:12:04.272 "memory_domains": [ 00:12:04.272 { 00:12:04.272 "dma_device_id": "system", 00:12:04.272 "dma_device_type": 1 00:12:04.272 }, 00:12:04.272 { 00:12:04.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.272 "dma_device_type": 2 00:12:04.272 } 00:12:04.272 ], 00:12:04.272 "driver_specific": {} 00:12:04.272 } 00:12:04.272 ] 00:12:04.272 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.272 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:04.272 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:04.272 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:04.272 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:04.272 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.272 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.272 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.272 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.272 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:04.272 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.272 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.272 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.272 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.272 14:38:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.272 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.272 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.272 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.272 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.272 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.272 "name": "Existed_Raid", 00:12:04.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.272 "strip_size_kb": 0, 00:12:04.272 "state": "configuring", 00:12:04.272 "raid_level": "raid1", 00:12:04.272 "superblock": false, 00:12:04.272 "num_base_bdevs": 3, 00:12:04.272 "num_base_bdevs_discovered": 2, 00:12:04.272 "num_base_bdevs_operational": 3, 00:12:04.272 "base_bdevs_list": [ 00:12:04.272 { 00:12:04.272 "name": "BaseBdev1", 00:12:04.272 "uuid": "40351c60-7b30-4e87-b2ef-0a9a14fa793e", 00:12:04.272 "is_configured": true, 00:12:04.272 "data_offset": 0, 00:12:04.272 "data_size": 65536 00:12:04.272 }, 00:12:04.272 { 00:12:04.272 "name": "BaseBdev2", 00:12:04.272 "uuid": "8e8fcc88-1034-4098-9f27-c7ea89a8dab2", 00:12:04.272 "is_configured": true, 00:12:04.272 "data_offset": 0, 00:12:04.272 "data_size": 65536 00:12:04.272 }, 00:12:04.272 { 00:12:04.272 "name": "BaseBdev3", 00:12:04.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.272 "is_configured": false, 00:12:04.272 "data_offset": 0, 00:12:04.272 "data_size": 0 00:12:04.272 } 00:12:04.272 ] 00:12:04.272 }' 00:12:04.272 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.272 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.893 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 
-- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:04.893 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.893 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.893 [2024-11-04 14:38:03.839735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:04.893 [2024-11-04 14:38:03.839793] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:04.893 [2024-11-04 14:38:03.839812] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:04.893 [2024-11-04 14:38:03.840205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:04.893 [2024-11-04 14:38:03.840456] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:04.893 [2024-11-04 14:38:03.840480] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:04.893 [2024-11-04 14:38:03.840827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.893 BaseBdev3 00:12:04.893 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.893 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:04.893 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:04.893 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:04.893 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:04.893 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:04.893 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:04.893 14:38:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:04.893 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.893 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.893 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.893 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:04.893 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.893 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.893 [ 00:12:04.893 { 00:12:04.893 "name": "BaseBdev3", 00:12:04.893 "aliases": [ 00:12:04.893 "61262599-d06e-4a71-b151-3222eac146c9" 00:12:04.893 ], 00:12:04.893 "product_name": "Malloc disk", 00:12:04.893 "block_size": 512, 00:12:04.893 "num_blocks": 65536, 00:12:04.893 "uuid": "61262599-d06e-4a71-b151-3222eac146c9", 00:12:04.893 "assigned_rate_limits": { 00:12:04.893 "rw_ios_per_sec": 0, 00:12:04.893 "rw_mbytes_per_sec": 0, 00:12:04.893 "r_mbytes_per_sec": 0, 00:12:04.893 "w_mbytes_per_sec": 0 00:12:04.893 }, 00:12:04.893 "claimed": true, 00:12:04.893 "claim_type": "exclusive_write", 00:12:04.893 "zoned": false, 00:12:04.893 "supported_io_types": { 00:12:04.893 "read": true, 00:12:04.893 "write": true, 00:12:04.893 "unmap": true, 00:12:04.893 "flush": true, 00:12:04.893 "reset": true, 00:12:04.893 "nvme_admin": false, 00:12:04.893 "nvme_io": false, 00:12:04.893 "nvme_io_md": false, 00:12:04.893 "write_zeroes": true, 00:12:04.893 "zcopy": true, 00:12:04.893 "get_zone_info": false, 00:12:04.893 "zone_management": false, 00:12:04.893 "zone_append": false, 00:12:04.893 "compare": false, 00:12:04.893 "compare_and_write": false, 00:12:04.893 "abort": true, 00:12:04.893 "seek_hole": false, 00:12:04.893 
"seek_data": false, 00:12:04.893 "copy": true, 00:12:04.893 "nvme_iov_md": false 00:12:04.893 }, 00:12:04.893 "memory_domains": [ 00:12:04.893 { 00:12:04.893 "dma_device_id": "system", 00:12:04.893 "dma_device_type": 1 00:12:04.893 }, 00:12:04.893 { 00:12:04.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.893 "dma_device_type": 2 00:12:04.893 } 00:12:04.893 ], 00:12:04.893 "driver_specific": {} 00:12:04.893 } 00:12:04.893 ] 00:12:04.893 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.893 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:04.893 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:04.893 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:04.893 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:04.893 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.893 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.893 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.893 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.893 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:04.894 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.894 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.894 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.894 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:12:04.894 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.894 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.894 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.894 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.894 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.894 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.894 "name": "Existed_Raid", 00:12:04.894 "uuid": "a9d57305-2a25-4074-a0b1-33280233d44d", 00:12:04.894 "strip_size_kb": 0, 00:12:04.894 "state": "online", 00:12:04.894 "raid_level": "raid1", 00:12:04.894 "superblock": false, 00:12:04.894 "num_base_bdevs": 3, 00:12:04.894 "num_base_bdevs_discovered": 3, 00:12:04.894 "num_base_bdevs_operational": 3, 00:12:04.894 "base_bdevs_list": [ 00:12:04.894 { 00:12:04.894 "name": "BaseBdev1", 00:12:04.894 "uuid": "40351c60-7b30-4e87-b2ef-0a9a14fa793e", 00:12:04.894 "is_configured": true, 00:12:04.894 "data_offset": 0, 00:12:04.894 "data_size": 65536 00:12:04.894 }, 00:12:04.894 { 00:12:04.894 "name": "BaseBdev2", 00:12:04.894 "uuid": "8e8fcc88-1034-4098-9f27-c7ea89a8dab2", 00:12:04.894 "is_configured": true, 00:12:04.894 "data_offset": 0, 00:12:04.894 "data_size": 65536 00:12:04.894 }, 00:12:04.894 { 00:12:04.894 "name": "BaseBdev3", 00:12:04.894 "uuid": "61262599-d06e-4a71-b151-3222eac146c9", 00:12:04.894 "is_configured": true, 00:12:04.894 "data_offset": 0, 00:12:04.894 "data_size": 65536 00:12:04.894 } 00:12:04.894 ] 00:12:04.894 }' 00:12:04.894 14:38:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.894 14:38:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.462 
14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:05.462 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:05.462 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:05.462 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:05.462 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:05.462 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:05.462 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:05.462 14:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.462 14:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.462 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:05.462 [2024-11-04 14:38:04.404360] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:05.462 14:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.462 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:05.462 "name": "Existed_Raid", 00:12:05.462 "aliases": [ 00:12:05.462 "a9d57305-2a25-4074-a0b1-33280233d44d" 00:12:05.462 ], 00:12:05.462 "product_name": "Raid Volume", 00:12:05.462 "block_size": 512, 00:12:05.462 "num_blocks": 65536, 00:12:05.462 "uuid": "a9d57305-2a25-4074-a0b1-33280233d44d", 00:12:05.462 "assigned_rate_limits": { 00:12:05.462 "rw_ios_per_sec": 0, 00:12:05.462 "rw_mbytes_per_sec": 0, 00:12:05.462 "r_mbytes_per_sec": 0, 00:12:05.462 "w_mbytes_per_sec": 0 00:12:05.462 }, 00:12:05.462 "claimed": false, 00:12:05.462 "zoned": false, 
00:12:05.462 "supported_io_types": { 00:12:05.462 "read": true, 00:12:05.462 "write": true, 00:12:05.462 "unmap": false, 00:12:05.462 "flush": false, 00:12:05.462 "reset": true, 00:12:05.462 "nvme_admin": false, 00:12:05.462 "nvme_io": false, 00:12:05.462 "nvme_io_md": false, 00:12:05.462 "write_zeroes": true, 00:12:05.462 "zcopy": false, 00:12:05.462 "get_zone_info": false, 00:12:05.462 "zone_management": false, 00:12:05.462 "zone_append": false, 00:12:05.462 "compare": false, 00:12:05.462 "compare_and_write": false, 00:12:05.462 "abort": false, 00:12:05.462 "seek_hole": false, 00:12:05.462 "seek_data": false, 00:12:05.462 "copy": false, 00:12:05.462 "nvme_iov_md": false 00:12:05.462 }, 00:12:05.462 "memory_domains": [ 00:12:05.462 { 00:12:05.462 "dma_device_id": "system", 00:12:05.462 "dma_device_type": 1 00:12:05.462 }, 00:12:05.462 { 00:12:05.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.462 "dma_device_type": 2 00:12:05.462 }, 00:12:05.462 { 00:12:05.462 "dma_device_id": "system", 00:12:05.462 "dma_device_type": 1 00:12:05.462 }, 00:12:05.462 { 00:12:05.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.462 "dma_device_type": 2 00:12:05.462 }, 00:12:05.462 { 00:12:05.462 "dma_device_id": "system", 00:12:05.462 "dma_device_type": 1 00:12:05.462 }, 00:12:05.462 { 00:12:05.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.462 "dma_device_type": 2 00:12:05.462 } 00:12:05.462 ], 00:12:05.462 "driver_specific": { 00:12:05.462 "raid": { 00:12:05.463 "uuid": "a9d57305-2a25-4074-a0b1-33280233d44d", 00:12:05.463 "strip_size_kb": 0, 00:12:05.463 "state": "online", 00:12:05.463 "raid_level": "raid1", 00:12:05.463 "superblock": false, 00:12:05.463 "num_base_bdevs": 3, 00:12:05.463 "num_base_bdevs_discovered": 3, 00:12:05.463 "num_base_bdevs_operational": 3, 00:12:05.463 "base_bdevs_list": [ 00:12:05.463 { 00:12:05.463 "name": "BaseBdev1", 00:12:05.463 "uuid": "40351c60-7b30-4e87-b2ef-0a9a14fa793e", 00:12:05.463 "is_configured": true, 00:12:05.463 
"data_offset": 0, 00:12:05.463 "data_size": 65536 00:12:05.463 }, 00:12:05.463 { 00:12:05.463 "name": "BaseBdev2", 00:12:05.463 "uuid": "8e8fcc88-1034-4098-9f27-c7ea89a8dab2", 00:12:05.463 "is_configured": true, 00:12:05.463 "data_offset": 0, 00:12:05.463 "data_size": 65536 00:12:05.463 }, 00:12:05.463 { 00:12:05.463 "name": "BaseBdev3", 00:12:05.463 "uuid": "61262599-d06e-4a71-b151-3222eac146c9", 00:12:05.463 "is_configured": true, 00:12:05.463 "data_offset": 0, 00:12:05.463 "data_size": 65536 00:12:05.463 } 00:12:05.463 ] 00:12:05.463 } 00:12:05.463 } 00:12:05.463 }' 00:12:05.463 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:05.463 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:05.463 BaseBdev2 00:12:05.463 BaseBdev3' 00:12:05.463 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.463 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:05.463 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.463 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:05.463 14:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.463 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.463 14:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.463 14:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.722 [2024-11-04 14:38:04.712136] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.722 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.723 14:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.723 14:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.723 14:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.982 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.982 "name": "Existed_Raid", 00:12:05.982 "uuid": "a9d57305-2a25-4074-a0b1-33280233d44d", 00:12:05.982 "strip_size_kb": 0, 00:12:05.982 "state": "online", 00:12:05.982 "raid_level": "raid1", 00:12:05.982 "superblock": false, 00:12:05.982 "num_base_bdevs": 3, 00:12:05.982 "num_base_bdevs_discovered": 2, 00:12:05.982 "num_base_bdevs_operational": 2, 00:12:05.982 "base_bdevs_list": [ 00:12:05.982 { 00:12:05.982 "name": null, 00:12:05.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.982 "is_configured": false, 00:12:05.982 "data_offset": 0, 00:12:05.982 "data_size": 65536 00:12:05.982 }, 00:12:05.982 { 00:12:05.982 "name": "BaseBdev2", 00:12:05.982 "uuid": "8e8fcc88-1034-4098-9f27-c7ea89a8dab2", 00:12:05.982 "is_configured": true, 00:12:05.982 "data_offset": 0, 00:12:05.982 "data_size": 65536 00:12:05.982 }, 00:12:05.982 { 00:12:05.982 "name": "BaseBdev3", 00:12:05.982 "uuid": "61262599-d06e-4a71-b151-3222eac146c9", 00:12:05.982 "is_configured": true, 00:12:05.982 "data_offset": 0, 00:12:05.982 "data_size": 65536 00:12:05.982 } 00:12:05.982 ] 
00:12:05.982 }' 00:12:05.982 14:38:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.982 14:38:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.240 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:06.240 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:06.240 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.240 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:06.240 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.240 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.240 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.240 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:06.240 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:06.240 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:06.240 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.240 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.499 [2024-11-04 14:38:05.364390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:06.499 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.499 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:06.499 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:06.499 14:38:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.499 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:06.499 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.499 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.499 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.499 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:06.499 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:06.499 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:06.499 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.499 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.499 [2024-11-04 14:38:05.506115] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:06.499 [2024-11-04 14:38:05.506243] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:06.499 [2024-11-04 14:38:05.593638] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:06.499 [2024-11-04 14:38:05.593717] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:06.499 [2024-11-04 14:38:05.593738] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:06.499 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.499 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:06.499 14:38:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:06.499 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:06.499 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.499 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.499 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.499 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.758 BaseBdev2 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:06.758 
14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.758 [ 00:12:06.758 { 00:12:06.758 "name": "BaseBdev2", 00:12:06.758 "aliases": [ 00:12:06.758 "aa53d8be-d023-449a-a555-120c10ecab0b" 00:12:06.758 ], 00:12:06.758 "product_name": "Malloc disk", 00:12:06.758 "block_size": 512, 00:12:06.758 "num_blocks": 65536, 00:12:06.758 "uuid": "aa53d8be-d023-449a-a555-120c10ecab0b", 00:12:06.758 "assigned_rate_limits": { 00:12:06.758 "rw_ios_per_sec": 0, 00:12:06.758 "rw_mbytes_per_sec": 0, 00:12:06.758 "r_mbytes_per_sec": 0, 00:12:06.758 "w_mbytes_per_sec": 0 00:12:06.758 }, 00:12:06.758 "claimed": false, 00:12:06.758 "zoned": false, 00:12:06.758 "supported_io_types": { 00:12:06.758 "read": true, 00:12:06.758 "write": true, 00:12:06.758 "unmap": true, 00:12:06.758 "flush": true, 00:12:06.758 "reset": true, 00:12:06.758 "nvme_admin": false, 00:12:06.758 "nvme_io": false, 00:12:06.758 "nvme_io_md": false, 00:12:06.758 "write_zeroes": true, 
00:12:06.758 "zcopy": true, 00:12:06.758 "get_zone_info": false, 00:12:06.758 "zone_management": false, 00:12:06.758 "zone_append": false, 00:12:06.758 "compare": false, 00:12:06.758 "compare_and_write": false, 00:12:06.758 "abort": true, 00:12:06.758 "seek_hole": false, 00:12:06.758 "seek_data": false, 00:12:06.758 "copy": true, 00:12:06.758 "nvme_iov_md": false 00:12:06.758 }, 00:12:06.758 "memory_domains": [ 00:12:06.758 { 00:12:06.758 "dma_device_id": "system", 00:12:06.758 "dma_device_type": 1 00:12:06.758 }, 00:12:06.758 { 00:12:06.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.758 "dma_device_type": 2 00:12:06.758 } 00:12:06.758 ], 00:12:06.758 "driver_specific": {} 00:12:06.758 } 00:12:06.758 ] 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.758 BaseBdev3 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:06.758 14:38:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.758 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.758 [ 00:12:06.758 { 00:12:06.758 "name": "BaseBdev3", 00:12:06.758 "aliases": [ 00:12:06.758 "9df42278-c1a6-4f1d-842b-b1d03d01c1ad" 00:12:06.758 ], 00:12:06.758 "product_name": "Malloc disk", 00:12:06.758 "block_size": 512, 00:12:06.758 "num_blocks": 65536, 00:12:06.758 "uuid": "9df42278-c1a6-4f1d-842b-b1d03d01c1ad", 00:12:06.758 "assigned_rate_limits": { 00:12:06.758 "rw_ios_per_sec": 0, 00:12:06.758 "rw_mbytes_per_sec": 0, 00:12:06.758 "r_mbytes_per_sec": 0, 00:12:06.758 "w_mbytes_per_sec": 0 00:12:06.758 }, 00:12:06.758 "claimed": false, 00:12:06.758 "zoned": false, 00:12:06.758 "supported_io_types": { 00:12:06.758 "read": true, 00:12:06.758 "write": true, 00:12:06.758 "unmap": true, 00:12:06.758 "flush": true, 00:12:06.758 "reset": true, 00:12:06.758 "nvme_admin": false, 00:12:06.758 "nvme_io": false, 00:12:06.758 "nvme_io_md": false, 00:12:06.759 "write_zeroes": true, 
00:12:06.759 "zcopy": true, 00:12:06.759 "get_zone_info": false, 00:12:06.759 "zone_management": false, 00:12:06.759 "zone_append": false, 00:12:06.759 "compare": false, 00:12:06.759 "compare_and_write": false, 00:12:06.759 "abort": true, 00:12:06.759 "seek_hole": false, 00:12:06.759 "seek_data": false, 00:12:06.759 "copy": true, 00:12:06.759 "nvme_iov_md": false 00:12:06.759 }, 00:12:06.759 "memory_domains": [ 00:12:06.759 { 00:12:06.759 "dma_device_id": "system", 00:12:06.759 "dma_device_type": 1 00:12:06.759 }, 00:12:06.759 { 00:12:06.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.759 "dma_device_type": 2 00:12:06.759 } 00:12:06.759 ], 00:12:06.759 "driver_specific": {} 00:12:06.759 } 00:12:06.759 ] 00:12:06.759 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.759 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:06.759 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:06.759 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:06.759 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:06.759 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.759 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.759 [2024-11-04 14:38:05.802595] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:06.759 [2024-11-04 14:38:05.802658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:06.759 [2024-11-04 14:38:05.802686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:06.759 [2024-11-04 14:38:05.805109] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:06.759 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.759 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:06.759 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.759 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.759 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.759 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.759 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:06.759 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.759 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.759 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.759 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.759 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.759 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.759 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.759 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.759 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.759 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:12:06.759 "name": "Existed_Raid", 00:12:06.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.759 "strip_size_kb": 0, 00:12:06.759 "state": "configuring", 00:12:06.759 "raid_level": "raid1", 00:12:06.759 "superblock": false, 00:12:06.759 "num_base_bdevs": 3, 00:12:06.759 "num_base_bdevs_discovered": 2, 00:12:06.759 "num_base_bdevs_operational": 3, 00:12:06.759 "base_bdevs_list": [ 00:12:06.759 { 00:12:06.759 "name": "BaseBdev1", 00:12:06.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.759 "is_configured": false, 00:12:06.759 "data_offset": 0, 00:12:06.759 "data_size": 0 00:12:06.759 }, 00:12:06.759 { 00:12:06.759 "name": "BaseBdev2", 00:12:06.759 "uuid": "aa53d8be-d023-449a-a555-120c10ecab0b", 00:12:06.759 "is_configured": true, 00:12:06.759 "data_offset": 0, 00:12:06.759 "data_size": 65536 00:12:06.759 }, 00:12:06.759 { 00:12:06.759 "name": "BaseBdev3", 00:12:06.759 "uuid": "9df42278-c1a6-4f1d-842b-b1d03d01c1ad", 00:12:06.759 "is_configured": true, 00:12:06.759 "data_offset": 0, 00:12:06.759 "data_size": 65536 00:12:06.759 } 00:12:06.759 ] 00:12:06.759 }' 00:12:06.759 14:38:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.759 14:38:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.326 14:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:07.326 14:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.326 14:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.326 [2024-11-04 14:38:06.314769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:07.326 14:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.326 14:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:12:07.326 14:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.326 14:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.326 14:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.326 14:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.326 14:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:07.326 14:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.326 14:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.326 14:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.326 14:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.326 14:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.326 14:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.326 14:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.326 14:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.326 14:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.326 14:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.326 "name": "Existed_Raid", 00:12:07.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.326 "strip_size_kb": 0, 00:12:07.326 "state": "configuring", 00:12:07.326 "raid_level": "raid1", 00:12:07.326 "superblock": false, 00:12:07.326 "num_base_bdevs": 3, 
00:12:07.326 "num_base_bdevs_discovered": 1, 00:12:07.326 "num_base_bdevs_operational": 3, 00:12:07.326 "base_bdevs_list": [ 00:12:07.326 { 00:12:07.326 "name": "BaseBdev1", 00:12:07.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.326 "is_configured": false, 00:12:07.326 "data_offset": 0, 00:12:07.326 "data_size": 0 00:12:07.326 }, 00:12:07.326 { 00:12:07.326 "name": null, 00:12:07.326 "uuid": "aa53d8be-d023-449a-a555-120c10ecab0b", 00:12:07.326 "is_configured": false, 00:12:07.326 "data_offset": 0, 00:12:07.326 "data_size": 65536 00:12:07.326 }, 00:12:07.326 { 00:12:07.326 "name": "BaseBdev3", 00:12:07.326 "uuid": "9df42278-c1a6-4f1d-842b-b1d03d01c1ad", 00:12:07.326 "is_configured": true, 00:12:07.326 "data_offset": 0, 00:12:07.326 "data_size": 65536 00:12:07.326 } 00:12:07.326 ] 00:12:07.326 }' 00:12:07.326 14:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.326 14:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.894 14:38:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.894 [2024-11-04 14:38:06.934307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:07.894 BaseBdev1 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.894 [ 00:12:07.894 { 00:12:07.894 "name": "BaseBdev1", 00:12:07.894 "aliases": [ 00:12:07.894 "cc5abaf1-54d9-4873-84a4-fb9486c8cc7f" 00:12:07.894 ], 00:12:07.894 "product_name": "Malloc disk", 
00:12:07.894 "block_size": 512, 00:12:07.894 "num_blocks": 65536, 00:12:07.894 "uuid": "cc5abaf1-54d9-4873-84a4-fb9486c8cc7f", 00:12:07.894 "assigned_rate_limits": { 00:12:07.894 "rw_ios_per_sec": 0, 00:12:07.894 "rw_mbytes_per_sec": 0, 00:12:07.894 "r_mbytes_per_sec": 0, 00:12:07.894 "w_mbytes_per_sec": 0 00:12:07.894 }, 00:12:07.894 "claimed": true, 00:12:07.894 "claim_type": "exclusive_write", 00:12:07.894 "zoned": false, 00:12:07.894 "supported_io_types": { 00:12:07.894 "read": true, 00:12:07.894 "write": true, 00:12:07.894 "unmap": true, 00:12:07.894 "flush": true, 00:12:07.894 "reset": true, 00:12:07.894 "nvme_admin": false, 00:12:07.894 "nvme_io": false, 00:12:07.894 "nvme_io_md": false, 00:12:07.894 "write_zeroes": true, 00:12:07.894 "zcopy": true, 00:12:07.894 "get_zone_info": false, 00:12:07.894 "zone_management": false, 00:12:07.894 "zone_append": false, 00:12:07.894 "compare": false, 00:12:07.894 "compare_and_write": false, 00:12:07.894 "abort": true, 00:12:07.894 "seek_hole": false, 00:12:07.894 "seek_data": false, 00:12:07.894 "copy": true, 00:12:07.894 "nvme_iov_md": false 00:12:07.894 }, 00:12:07.894 "memory_domains": [ 00:12:07.894 { 00:12:07.894 "dma_device_id": "system", 00:12:07.894 "dma_device_type": 1 00:12:07.894 }, 00:12:07.894 { 00:12:07.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.894 "dma_device_type": 2 00:12:07.894 } 00:12:07.894 ], 00:12:07.894 "driver_specific": {} 00:12:07.894 } 00:12:07.894 ] 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.894 14:38:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.153 14:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.153 "name": "Existed_Raid", 00:12:08.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.153 "strip_size_kb": 0, 00:12:08.153 "state": "configuring", 00:12:08.153 "raid_level": "raid1", 00:12:08.153 "superblock": false, 00:12:08.153 "num_base_bdevs": 3, 00:12:08.153 "num_base_bdevs_discovered": 2, 00:12:08.153 "num_base_bdevs_operational": 3, 00:12:08.153 "base_bdevs_list": [ 00:12:08.153 { 00:12:08.153 "name": "BaseBdev1", 00:12:08.153 "uuid": 
"cc5abaf1-54d9-4873-84a4-fb9486c8cc7f", 00:12:08.153 "is_configured": true, 00:12:08.153 "data_offset": 0, 00:12:08.153 "data_size": 65536 00:12:08.153 }, 00:12:08.153 { 00:12:08.153 "name": null, 00:12:08.153 "uuid": "aa53d8be-d023-449a-a555-120c10ecab0b", 00:12:08.153 "is_configured": false, 00:12:08.153 "data_offset": 0, 00:12:08.153 "data_size": 65536 00:12:08.153 }, 00:12:08.153 { 00:12:08.153 "name": "BaseBdev3", 00:12:08.153 "uuid": "9df42278-c1a6-4f1d-842b-b1d03d01c1ad", 00:12:08.153 "is_configured": true, 00:12:08.153 "data_offset": 0, 00:12:08.153 "data_size": 65536 00:12:08.153 } 00:12:08.153 ] 00:12:08.153 }' 00:12:08.153 14:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.153 14:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.415 14:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.415 14:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:08.415 14:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.415 14:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.415 14:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.415 14:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:08.415 14:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:08.415 14:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.415 14:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.415 [2024-11-04 14:38:07.518473] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:08.415 14:38:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.415 14:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:08.415 14:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.415 14:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.415 14:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.415 14:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.415 14:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.415 14:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.415 14:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.415 14:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.415 14:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.415 14:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.415 14:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.415 14:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.415 14:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.676 14:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.676 14:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.676 "name": "Existed_Raid", 00:12:08.676 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:08.676 "strip_size_kb": 0, 00:12:08.676 "state": "configuring", 00:12:08.676 "raid_level": "raid1", 00:12:08.676 "superblock": false, 00:12:08.676 "num_base_bdevs": 3, 00:12:08.676 "num_base_bdevs_discovered": 1, 00:12:08.676 "num_base_bdevs_operational": 3, 00:12:08.676 "base_bdevs_list": [ 00:12:08.676 { 00:12:08.676 "name": "BaseBdev1", 00:12:08.676 "uuid": "cc5abaf1-54d9-4873-84a4-fb9486c8cc7f", 00:12:08.676 "is_configured": true, 00:12:08.676 "data_offset": 0, 00:12:08.676 "data_size": 65536 00:12:08.676 }, 00:12:08.676 { 00:12:08.676 "name": null, 00:12:08.676 "uuid": "aa53d8be-d023-449a-a555-120c10ecab0b", 00:12:08.676 "is_configured": false, 00:12:08.676 "data_offset": 0, 00:12:08.676 "data_size": 65536 00:12:08.676 }, 00:12:08.676 { 00:12:08.676 "name": null, 00:12:08.676 "uuid": "9df42278-c1a6-4f1d-842b-b1d03d01c1ad", 00:12:08.676 "is_configured": false, 00:12:08.676 "data_offset": 0, 00:12:08.676 "data_size": 65536 00:12:08.676 } 00:12:08.676 ] 00:12:08.676 }' 00:12:08.676 14:38:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.676 14:38:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.934 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.934 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:08.934 14:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.934 14:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.193 14:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.193 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:09.193 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:09.193 14:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.193 14:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.193 [2024-11-04 14:38:08.114712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:09.193 14:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.193 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:09.193 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.193 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.193 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.193 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.193 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:09.193 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.193 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.193 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.193 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.193 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.193 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.193 14:38:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.193 14:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.193 14:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.193 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.193 "name": "Existed_Raid", 00:12:09.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.193 "strip_size_kb": 0, 00:12:09.193 "state": "configuring", 00:12:09.193 "raid_level": "raid1", 00:12:09.193 "superblock": false, 00:12:09.193 "num_base_bdevs": 3, 00:12:09.193 "num_base_bdevs_discovered": 2, 00:12:09.193 "num_base_bdevs_operational": 3, 00:12:09.193 "base_bdevs_list": [ 00:12:09.193 { 00:12:09.193 "name": "BaseBdev1", 00:12:09.193 "uuid": "cc5abaf1-54d9-4873-84a4-fb9486c8cc7f", 00:12:09.193 "is_configured": true, 00:12:09.193 "data_offset": 0, 00:12:09.193 "data_size": 65536 00:12:09.193 }, 00:12:09.193 { 00:12:09.193 "name": null, 00:12:09.193 "uuid": "aa53d8be-d023-449a-a555-120c10ecab0b", 00:12:09.193 "is_configured": false, 00:12:09.193 "data_offset": 0, 00:12:09.193 "data_size": 65536 00:12:09.193 }, 00:12:09.193 { 00:12:09.193 "name": "BaseBdev3", 00:12:09.193 "uuid": "9df42278-c1a6-4f1d-842b-b1d03d01c1ad", 00:12:09.193 "is_configured": true, 00:12:09.193 "data_offset": 0, 00:12:09.193 "data_size": 65536 00:12:09.193 } 00:12:09.193 ] 00:12:09.193 }' 00:12:09.193 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.193 14:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.761 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.761 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:09.761 14:38:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.761 14:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.761 14:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.761 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:09.761 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:09.761 14:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.761 14:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.761 [2024-11-04 14:38:08.682861] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:09.761 14:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.761 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:09.761 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.761 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.761 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.761 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.761 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:09.761 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.761 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.761 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.761 14:38:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.761 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.761 14:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.761 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.761 14:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.761 14:38:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.761 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.761 "name": "Existed_Raid", 00:12:09.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.761 "strip_size_kb": 0, 00:12:09.761 "state": "configuring", 00:12:09.761 "raid_level": "raid1", 00:12:09.761 "superblock": false, 00:12:09.761 "num_base_bdevs": 3, 00:12:09.761 "num_base_bdevs_discovered": 1, 00:12:09.761 "num_base_bdevs_operational": 3, 00:12:09.761 "base_bdevs_list": [ 00:12:09.761 { 00:12:09.761 "name": null, 00:12:09.761 "uuid": "cc5abaf1-54d9-4873-84a4-fb9486c8cc7f", 00:12:09.761 "is_configured": false, 00:12:09.761 "data_offset": 0, 00:12:09.761 "data_size": 65536 00:12:09.761 }, 00:12:09.761 { 00:12:09.761 "name": null, 00:12:09.761 "uuid": "aa53d8be-d023-449a-a555-120c10ecab0b", 00:12:09.761 "is_configured": false, 00:12:09.761 "data_offset": 0, 00:12:09.761 "data_size": 65536 00:12:09.761 }, 00:12:09.761 { 00:12:09.761 "name": "BaseBdev3", 00:12:09.761 "uuid": "9df42278-c1a6-4f1d-842b-b1d03d01c1ad", 00:12:09.761 "is_configured": true, 00:12:09.761 "data_offset": 0, 00:12:09.761 "data_size": 65536 00:12:09.761 } 00:12:09.761 ] 00:12:09.761 }' 00:12:09.761 14:38:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.761 14:38:08 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:12:10.329 14:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.329 14:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:10.329 14:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.329 14:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.329 14:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.329 14:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:10.329 14:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:10.329 14:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.329 14:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.329 [2024-11-04 14:38:09.339853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:10.329 14:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.329 14:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:10.329 14:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.329 14:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.329 14:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.329 14:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.329 14:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:12:10.329 14:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.329 14:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.329 14:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.329 14:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.329 14:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.329 14:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.329 14:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.329 14:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.329 14:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.329 14:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.329 "name": "Existed_Raid", 00:12:10.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.329 "strip_size_kb": 0, 00:12:10.329 "state": "configuring", 00:12:10.329 "raid_level": "raid1", 00:12:10.329 "superblock": false, 00:12:10.329 "num_base_bdevs": 3, 00:12:10.329 "num_base_bdevs_discovered": 2, 00:12:10.329 "num_base_bdevs_operational": 3, 00:12:10.329 "base_bdevs_list": [ 00:12:10.329 { 00:12:10.329 "name": null, 00:12:10.329 "uuid": "cc5abaf1-54d9-4873-84a4-fb9486c8cc7f", 00:12:10.330 "is_configured": false, 00:12:10.330 "data_offset": 0, 00:12:10.330 "data_size": 65536 00:12:10.330 }, 00:12:10.330 { 00:12:10.330 "name": "BaseBdev2", 00:12:10.330 "uuid": "aa53d8be-d023-449a-a555-120c10ecab0b", 00:12:10.330 "is_configured": true, 00:12:10.330 "data_offset": 0, 00:12:10.330 "data_size": 65536 00:12:10.330 }, 00:12:10.330 { 
00:12:10.330 "name": "BaseBdev3", 00:12:10.330 "uuid": "9df42278-c1a6-4f1d-842b-b1d03d01c1ad", 00:12:10.330 "is_configured": true, 00:12:10.330 "data_offset": 0, 00:12:10.330 "data_size": 65536 00:12:10.330 } 00:12:10.330 ] 00:12:10.330 }' 00:12:10.330 14:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.330 14:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.897 14:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.897 14:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.897 14:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.897 14:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:10.897 14:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.897 14:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:10.897 14:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.897 14:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.897 14:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:10.897 14:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.897 14:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.897 14:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cc5abaf1-54d9-4873-84a4-fb9486c8cc7f 00:12:10.897 14:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.897 14:38:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.897 [2024-11-04 14:38:09.989868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:10.897 [2024-11-04 14:38:09.989960] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:10.897 [2024-11-04 14:38:09.989975] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:10.897 [2024-11-04 14:38:09.990310] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:10.897 [2024-11-04 14:38:09.990529] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:10.897 [2024-11-04 14:38:09.990568] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:10.897 [2024-11-04 14:38:09.990886] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.897 NewBaseBdev 00:12:10.897 14:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.897 14:38:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:10.897 14:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:12:10.897 14:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:10.897 14:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:10.897 14:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:10.897 14:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:10.897 14:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:10.897 14:38:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.897 14:38:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.897 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.897 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:10.897 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.897 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.897 [ 00:12:10.897 { 00:12:10.897 "name": "NewBaseBdev", 00:12:10.897 "aliases": [ 00:12:10.897 "cc5abaf1-54d9-4873-84a4-fb9486c8cc7f" 00:12:10.897 ], 00:12:10.897 "product_name": "Malloc disk", 00:12:10.897 "block_size": 512, 00:12:10.897 "num_blocks": 65536, 00:12:10.897 "uuid": "cc5abaf1-54d9-4873-84a4-fb9486c8cc7f", 00:12:10.897 "assigned_rate_limits": { 00:12:10.897 "rw_ios_per_sec": 0, 00:12:10.897 "rw_mbytes_per_sec": 0, 00:12:10.897 "r_mbytes_per_sec": 0, 00:12:10.897 "w_mbytes_per_sec": 0 00:12:10.897 }, 00:12:10.897 "claimed": true, 00:12:10.897 "claim_type": "exclusive_write", 00:12:10.897 "zoned": false, 00:12:10.897 "supported_io_types": { 00:12:10.897 "read": true, 00:12:10.897 "write": true, 00:12:10.897 "unmap": true, 00:12:10.897 "flush": true, 00:12:10.897 "reset": true, 00:12:10.897 "nvme_admin": false, 00:12:10.897 "nvme_io": false, 00:12:10.897 "nvme_io_md": false, 00:12:10.897 "write_zeroes": true, 00:12:10.897 "zcopy": true, 00:12:11.199 "get_zone_info": false, 00:12:11.199 "zone_management": false, 00:12:11.199 "zone_append": false, 00:12:11.199 "compare": false, 00:12:11.199 "compare_and_write": false, 00:12:11.199 "abort": true, 00:12:11.199 "seek_hole": false, 00:12:11.199 "seek_data": false, 00:12:11.199 "copy": true, 00:12:11.199 "nvme_iov_md": false 00:12:11.199 }, 00:12:11.199 "memory_domains": [ 00:12:11.199 { 00:12:11.199 
"dma_device_id": "system", 00:12:11.199 "dma_device_type": 1 00:12:11.199 }, 00:12:11.199 { 00:12:11.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.199 "dma_device_type": 2 00:12:11.199 } 00:12:11.199 ], 00:12:11.199 "driver_specific": {} 00:12:11.199 } 00:12:11.199 ] 00:12:11.199 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.199 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:11.199 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:11.199 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.199 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.199 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.199 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.199 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:11.199 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.199 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.199 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.199 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.199 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.199 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.199 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.199 14:38:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.199 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.199 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.199 "name": "Existed_Raid", 00:12:11.199 "uuid": "e4567eb4-9025-4a56-a3ed-94313038a48d", 00:12:11.199 "strip_size_kb": 0, 00:12:11.199 "state": "online", 00:12:11.199 "raid_level": "raid1", 00:12:11.199 "superblock": false, 00:12:11.199 "num_base_bdevs": 3, 00:12:11.199 "num_base_bdevs_discovered": 3, 00:12:11.199 "num_base_bdevs_operational": 3, 00:12:11.199 "base_bdevs_list": [ 00:12:11.199 { 00:12:11.199 "name": "NewBaseBdev", 00:12:11.199 "uuid": "cc5abaf1-54d9-4873-84a4-fb9486c8cc7f", 00:12:11.199 "is_configured": true, 00:12:11.199 "data_offset": 0, 00:12:11.199 "data_size": 65536 00:12:11.199 }, 00:12:11.199 { 00:12:11.199 "name": "BaseBdev2", 00:12:11.199 "uuid": "aa53d8be-d023-449a-a555-120c10ecab0b", 00:12:11.199 "is_configured": true, 00:12:11.199 "data_offset": 0, 00:12:11.199 "data_size": 65536 00:12:11.199 }, 00:12:11.199 { 00:12:11.199 "name": "BaseBdev3", 00:12:11.199 "uuid": "9df42278-c1a6-4f1d-842b-b1d03d01c1ad", 00:12:11.199 "is_configured": true, 00:12:11.199 "data_offset": 0, 00:12:11.199 "data_size": 65536 00:12:11.199 } 00:12:11.199 ] 00:12:11.199 }' 00:12:11.199 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.199 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.458 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:11.458 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:11.458 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:11.458 
14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:11.458 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:11.458 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:11.458 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:11.458 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:11.458 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.458 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.458 [2024-11-04 14:38:10.550517] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:11.458 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.718 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:11.718 "name": "Existed_Raid", 00:12:11.718 "aliases": [ 00:12:11.718 "e4567eb4-9025-4a56-a3ed-94313038a48d" 00:12:11.718 ], 00:12:11.718 "product_name": "Raid Volume", 00:12:11.718 "block_size": 512, 00:12:11.718 "num_blocks": 65536, 00:12:11.718 "uuid": "e4567eb4-9025-4a56-a3ed-94313038a48d", 00:12:11.718 "assigned_rate_limits": { 00:12:11.718 "rw_ios_per_sec": 0, 00:12:11.718 "rw_mbytes_per_sec": 0, 00:12:11.718 "r_mbytes_per_sec": 0, 00:12:11.718 "w_mbytes_per_sec": 0 00:12:11.718 }, 00:12:11.718 "claimed": false, 00:12:11.718 "zoned": false, 00:12:11.718 "supported_io_types": { 00:12:11.718 "read": true, 00:12:11.718 "write": true, 00:12:11.718 "unmap": false, 00:12:11.718 "flush": false, 00:12:11.718 "reset": true, 00:12:11.718 "nvme_admin": false, 00:12:11.718 "nvme_io": false, 00:12:11.718 "nvme_io_md": false, 00:12:11.718 "write_zeroes": true, 00:12:11.718 "zcopy": false, 00:12:11.718 
"get_zone_info": false, 00:12:11.718 "zone_management": false, 00:12:11.718 "zone_append": false, 00:12:11.718 "compare": false, 00:12:11.718 "compare_and_write": false, 00:12:11.718 "abort": false, 00:12:11.718 "seek_hole": false, 00:12:11.718 "seek_data": false, 00:12:11.718 "copy": false, 00:12:11.718 "nvme_iov_md": false 00:12:11.718 }, 00:12:11.718 "memory_domains": [ 00:12:11.718 { 00:12:11.718 "dma_device_id": "system", 00:12:11.718 "dma_device_type": 1 00:12:11.718 }, 00:12:11.718 { 00:12:11.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.718 "dma_device_type": 2 00:12:11.718 }, 00:12:11.718 { 00:12:11.718 "dma_device_id": "system", 00:12:11.718 "dma_device_type": 1 00:12:11.718 }, 00:12:11.718 { 00:12:11.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.718 "dma_device_type": 2 00:12:11.718 }, 00:12:11.718 { 00:12:11.718 "dma_device_id": "system", 00:12:11.718 "dma_device_type": 1 00:12:11.718 }, 00:12:11.718 { 00:12:11.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.718 "dma_device_type": 2 00:12:11.718 } 00:12:11.718 ], 00:12:11.718 "driver_specific": { 00:12:11.718 "raid": { 00:12:11.718 "uuid": "e4567eb4-9025-4a56-a3ed-94313038a48d", 00:12:11.718 "strip_size_kb": 0, 00:12:11.718 "state": "online", 00:12:11.718 "raid_level": "raid1", 00:12:11.718 "superblock": false, 00:12:11.718 "num_base_bdevs": 3, 00:12:11.718 "num_base_bdevs_discovered": 3, 00:12:11.718 "num_base_bdevs_operational": 3, 00:12:11.718 "base_bdevs_list": [ 00:12:11.718 { 00:12:11.718 "name": "NewBaseBdev", 00:12:11.718 "uuid": "cc5abaf1-54d9-4873-84a4-fb9486c8cc7f", 00:12:11.718 "is_configured": true, 00:12:11.718 "data_offset": 0, 00:12:11.718 "data_size": 65536 00:12:11.718 }, 00:12:11.718 { 00:12:11.718 "name": "BaseBdev2", 00:12:11.718 "uuid": "aa53d8be-d023-449a-a555-120c10ecab0b", 00:12:11.718 "is_configured": true, 00:12:11.718 "data_offset": 0, 00:12:11.718 "data_size": 65536 00:12:11.718 }, 00:12:11.718 { 00:12:11.718 "name": "BaseBdev3", 00:12:11.718 "uuid": 
"9df42278-c1a6-4f1d-842b-b1d03d01c1ad", 00:12:11.718 "is_configured": true, 00:12:11.718 "data_offset": 0, 00:12:11.718 "data_size": 65536 00:12:11.718 } 00:12:11.718 ] 00:12:11.718 } 00:12:11.718 } 00:12:11.718 }' 00:12:11.718 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:11.718 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:11.718 BaseBdev2 00:12:11.718 BaseBdev3' 00:12:11.718 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.718 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:11.718 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:11.718 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:11.718 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.718 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.718 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.718 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.718 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:11.718 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:11.718 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:11.718 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.718 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:11.718 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.718 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.718 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.718 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:11.718 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:11.718 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:11.718 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:11.718 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.718 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.718 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.718 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.978 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:11.978 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:11.978 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:11.978 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.978 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:11.978 [2024-11-04 14:38:10.866171] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:11.978 [2024-11-04 14:38:10.866351] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:11.978 [2024-11-04 14:38:10.866458] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:11.978 [2024-11-04 14:38:10.866815] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:11.978 [2024-11-04 14:38:10.866833] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:11.978 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.978 14:38:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67436 00:12:11.978 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 67436 ']' 00:12:11.978 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 67436 00:12:11.978 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:12:11.978 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:11.978 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67436 00:12:11.978 killing process with pid 67436 00:12:11.978 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:11.978 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:11.978 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67436' 00:12:11.978 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 67436 00:12:11.978 
[2024-11-04 14:38:10.905722] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:11.978 14:38:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 67436 00:12:12.237 [2024-11-04 14:38:11.173422] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:13.197 ************************************ 00:12:13.197 END TEST raid_state_function_test 00:12:13.197 ************************************ 00:12:13.197 14:38:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:13.197 00:12:13.197 real 0m11.839s 00:12:13.197 user 0m19.676s 00:12:13.197 sys 0m1.600s 00:12:13.197 14:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:13.197 14:38:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.197 14:38:12 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:12:13.197 14:38:12 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:13.197 14:38:12 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:13.197 14:38:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:13.197 ************************************ 00:12:13.197 START TEST raid_state_function_test_sb 00:12:13.197 ************************************ 00:12:13.197 14:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 true 00:12:13.197 14:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:13.197 14:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:13.197 14:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:13.197 14:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:13.197 14:38:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:13.197 14:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:13.197 14:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:13.197 14:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:13.198 14:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:13.198 14:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:13.198 14:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:13.198 14:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:13.198 14:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:13.198 14:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:13.198 14:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:13.198 Process raid pid: 68074 00:12:13.198 14:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:13.198 14:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:13.198 14:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:13.198 14:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:13.198 14:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:13.198 14:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:13.198 14:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # 
'[' raid1 '!=' raid1 ']' 00:12:13.198 14:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:13.198 14:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:13.198 14:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:13.198 14:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68074 00:12:13.198 14:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68074' 00:12:13.198 14:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68074 00:12:13.198 14:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:13.198 14:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 68074 ']' 00:12:13.198 14:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.198 14:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:13.198 14:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.198 14:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:13.198 14:38:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.472 [2024-11-04 14:38:12.371637] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:12:13.472 [2024-11-04 14:38:12.372087] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:13.472 [2024-11-04 14:38:12.565633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.731 [2024-11-04 14:38:12.723074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.989 [2024-11-04 14:38:12.928413] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:13.989 [2024-11-04 14:38:12.928708] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:14.248 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:14.248 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:12:14.248 14:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:14.248 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.248 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.248 [2024-11-04 14:38:13.346621] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:14.248 [2024-11-04 14:38:13.346848] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:14.248 [2024-11-04 14:38:13.346878] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:14.248 [2024-11-04 14:38:13.346897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:14.248 [2024-11-04 14:38:13.346908] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:14.248 [2024-11-04 14:38:13.346922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:14.248 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.248 14:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:14.248 14:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.248 14:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.248 14:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.248 14:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.248 14:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:14.248 14:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.248 14:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.248 14:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.248 14:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.248 14:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.248 14:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.248 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.248 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.507 14:38:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.507 14:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.507 "name": "Existed_Raid", 00:12:14.507 "uuid": "a78366ed-a54b-4932-92ba-8ddf9fda6fa9", 00:12:14.507 "strip_size_kb": 0, 00:12:14.507 "state": "configuring", 00:12:14.507 "raid_level": "raid1", 00:12:14.507 "superblock": true, 00:12:14.507 "num_base_bdevs": 3, 00:12:14.507 "num_base_bdevs_discovered": 0, 00:12:14.507 "num_base_bdevs_operational": 3, 00:12:14.507 "base_bdevs_list": [ 00:12:14.507 { 00:12:14.507 "name": "BaseBdev1", 00:12:14.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.507 "is_configured": false, 00:12:14.507 "data_offset": 0, 00:12:14.507 "data_size": 0 00:12:14.507 }, 00:12:14.507 { 00:12:14.507 "name": "BaseBdev2", 00:12:14.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.507 "is_configured": false, 00:12:14.507 "data_offset": 0, 00:12:14.507 "data_size": 0 00:12:14.507 }, 00:12:14.507 { 00:12:14.507 "name": "BaseBdev3", 00:12:14.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.507 "is_configured": false, 00:12:14.507 "data_offset": 0, 00:12:14.507 "data_size": 0 00:12:14.507 } 00:12:14.507 ] 00:12:14.507 }' 00:12:14.507 14:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.507 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.766 14:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:14.766 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.766 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.766 [2024-11-04 14:38:13.862701] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:14.766 [2024-11-04 14:38:13.862742] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:14.766 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.766 14:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:14.766 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.766 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.766 [2024-11-04 14:38:13.870670] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:14.766 [2024-11-04 14:38:13.870725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:14.766 [2024-11-04 14:38:13.870740] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:14.766 [2024-11-04 14:38:13.870756] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:14.766 [2024-11-04 14:38:13.870765] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:14.766 [2024-11-04 14:38:13.870779] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:14.766 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.766 14:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:14.766 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.766 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.025 [2024-11-04 14:38:13.915247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:15.025 BaseBdev1 
00:12:15.025 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.025 14:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:15.025 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:15.025 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:15.025 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:15.025 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:15.026 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:15.026 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:15.026 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.026 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.026 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.026 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:15.026 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.026 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.026 [ 00:12:15.026 { 00:12:15.026 "name": "BaseBdev1", 00:12:15.026 "aliases": [ 00:12:15.026 "e4fe822d-d664-46b9-a315-a08adccdda2a" 00:12:15.026 ], 00:12:15.026 "product_name": "Malloc disk", 00:12:15.026 "block_size": 512, 00:12:15.026 "num_blocks": 65536, 00:12:15.026 "uuid": "e4fe822d-d664-46b9-a315-a08adccdda2a", 00:12:15.026 "assigned_rate_limits": { 00:12:15.026 
"rw_ios_per_sec": 0, 00:12:15.026 "rw_mbytes_per_sec": 0, 00:12:15.026 "r_mbytes_per_sec": 0, 00:12:15.026 "w_mbytes_per_sec": 0 00:12:15.026 }, 00:12:15.026 "claimed": true, 00:12:15.026 "claim_type": "exclusive_write", 00:12:15.026 "zoned": false, 00:12:15.026 "supported_io_types": { 00:12:15.026 "read": true, 00:12:15.026 "write": true, 00:12:15.026 "unmap": true, 00:12:15.026 "flush": true, 00:12:15.026 "reset": true, 00:12:15.026 "nvme_admin": false, 00:12:15.026 "nvme_io": false, 00:12:15.026 "nvme_io_md": false, 00:12:15.026 "write_zeroes": true, 00:12:15.026 "zcopy": true, 00:12:15.026 "get_zone_info": false, 00:12:15.026 "zone_management": false, 00:12:15.026 "zone_append": false, 00:12:15.026 "compare": false, 00:12:15.026 "compare_and_write": false, 00:12:15.026 "abort": true, 00:12:15.026 "seek_hole": false, 00:12:15.026 "seek_data": false, 00:12:15.026 "copy": true, 00:12:15.026 "nvme_iov_md": false 00:12:15.026 }, 00:12:15.026 "memory_domains": [ 00:12:15.026 { 00:12:15.026 "dma_device_id": "system", 00:12:15.026 "dma_device_type": 1 00:12:15.026 }, 00:12:15.026 { 00:12:15.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.026 "dma_device_type": 2 00:12:15.026 } 00:12:15.026 ], 00:12:15.026 "driver_specific": {} 00:12:15.026 } 00:12:15.026 ] 00:12:15.026 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.026 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:15.026 14:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:15.026 14:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.026 14:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.026 14:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:15.026 14:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.026 14:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:15.026 14:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.026 14:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.026 14:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.026 14:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.026 14:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.026 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.026 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.026 14:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.026 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.026 14:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.026 "name": "Existed_Raid", 00:12:15.026 "uuid": "411e0965-07cb-4dcf-aa0f-31dae6c89823", 00:12:15.026 "strip_size_kb": 0, 00:12:15.026 "state": "configuring", 00:12:15.026 "raid_level": "raid1", 00:12:15.026 "superblock": true, 00:12:15.026 "num_base_bdevs": 3, 00:12:15.026 "num_base_bdevs_discovered": 1, 00:12:15.026 "num_base_bdevs_operational": 3, 00:12:15.026 "base_bdevs_list": [ 00:12:15.026 { 00:12:15.026 "name": "BaseBdev1", 00:12:15.026 "uuid": "e4fe822d-d664-46b9-a315-a08adccdda2a", 00:12:15.026 "is_configured": true, 00:12:15.026 "data_offset": 2048, 00:12:15.026 "data_size": 63488 
00:12:15.026 }, 00:12:15.026 { 00:12:15.026 "name": "BaseBdev2", 00:12:15.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.026 "is_configured": false, 00:12:15.026 "data_offset": 0, 00:12:15.026 "data_size": 0 00:12:15.026 }, 00:12:15.026 { 00:12:15.026 "name": "BaseBdev3", 00:12:15.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.026 "is_configured": false, 00:12:15.026 "data_offset": 0, 00:12:15.026 "data_size": 0 00:12:15.026 } 00:12:15.026 ] 00:12:15.026 }' 00:12:15.026 14:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.026 14:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.593 14:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:15.593 14:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.593 14:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.593 [2024-11-04 14:38:14.459442] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:15.593 [2024-11-04 14:38:14.459506] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:15.593 14:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.593 14:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:15.593 14:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.593 14:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.593 [2024-11-04 14:38:14.467514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:15.593 [2024-11-04 14:38:14.470022] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:15.593 [2024-11-04 14:38:14.470077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:15.593 [2024-11-04 14:38:14.470093] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:15.593 [2024-11-04 14:38:14.470109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:15.593 14:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.593 14:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:15.593 14:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:15.593 14:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:15.593 14:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.593 14:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.593 14:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.593 14:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.593 14:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:15.593 14:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.593 14:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.593 14:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.593 14:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:12:15.593 14:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.593 14:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.593 14:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.593 14:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.593 14:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.593 14:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.593 "name": "Existed_Raid", 00:12:15.593 "uuid": "6a5b0ae8-1369-44ee-a5e0-d0dea002fa1b", 00:12:15.593 "strip_size_kb": 0, 00:12:15.593 "state": "configuring", 00:12:15.593 "raid_level": "raid1", 00:12:15.593 "superblock": true, 00:12:15.593 "num_base_bdevs": 3, 00:12:15.593 "num_base_bdevs_discovered": 1, 00:12:15.593 "num_base_bdevs_operational": 3, 00:12:15.593 "base_bdevs_list": [ 00:12:15.593 { 00:12:15.593 "name": "BaseBdev1", 00:12:15.593 "uuid": "e4fe822d-d664-46b9-a315-a08adccdda2a", 00:12:15.593 "is_configured": true, 00:12:15.593 "data_offset": 2048, 00:12:15.593 "data_size": 63488 00:12:15.593 }, 00:12:15.593 { 00:12:15.593 "name": "BaseBdev2", 00:12:15.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.593 "is_configured": false, 00:12:15.593 "data_offset": 0, 00:12:15.593 "data_size": 0 00:12:15.593 }, 00:12:15.593 { 00:12:15.593 "name": "BaseBdev3", 00:12:15.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.593 "is_configured": false, 00:12:15.593 "data_offset": 0, 00:12:15.593 "data_size": 0 00:12:15.593 } 00:12:15.593 ] 00:12:15.593 }' 00:12:15.593 14:38:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.593 14:38:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:12:16.159 14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:16.159 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.159 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.159 [2024-11-04 14:38:15.041494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:16.159 BaseBdev2 00:12:16.159 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.159 14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:16.159 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:16.159 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:16.159 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:16.159 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:16.159 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:16.159 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:16.159 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.159 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.159 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.160 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:16.160 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:16.160 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.160 [ 00:12:16.160 { 00:12:16.160 "name": "BaseBdev2", 00:12:16.160 "aliases": [ 00:12:16.160 "5f92098a-b875-473b-b7db-e28fe11fff77" 00:12:16.160 ], 00:12:16.160 "product_name": "Malloc disk", 00:12:16.160 "block_size": 512, 00:12:16.160 "num_blocks": 65536, 00:12:16.160 "uuid": "5f92098a-b875-473b-b7db-e28fe11fff77", 00:12:16.160 "assigned_rate_limits": { 00:12:16.160 "rw_ios_per_sec": 0, 00:12:16.160 "rw_mbytes_per_sec": 0, 00:12:16.160 "r_mbytes_per_sec": 0, 00:12:16.160 "w_mbytes_per_sec": 0 00:12:16.160 }, 00:12:16.160 "claimed": true, 00:12:16.160 "claim_type": "exclusive_write", 00:12:16.160 "zoned": false, 00:12:16.160 "supported_io_types": { 00:12:16.160 "read": true, 00:12:16.160 "write": true, 00:12:16.160 "unmap": true, 00:12:16.160 "flush": true, 00:12:16.160 "reset": true, 00:12:16.160 "nvme_admin": false, 00:12:16.160 "nvme_io": false, 00:12:16.160 "nvme_io_md": false, 00:12:16.160 "write_zeroes": true, 00:12:16.160 "zcopy": true, 00:12:16.160 "get_zone_info": false, 00:12:16.160 "zone_management": false, 00:12:16.160 "zone_append": false, 00:12:16.160 "compare": false, 00:12:16.160 "compare_and_write": false, 00:12:16.160 "abort": true, 00:12:16.160 "seek_hole": false, 00:12:16.160 "seek_data": false, 00:12:16.160 "copy": true, 00:12:16.160 "nvme_iov_md": false 00:12:16.160 }, 00:12:16.160 "memory_domains": [ 00:12:16.160 { 00:12:16.160 "dma_device_id": "system", 00:12:16.160 "dma_device_type": 1 00:12:16.160 }, 00:12:16.160 { 00:12:16.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.160 "dma_device_type": 2 00:12:16.160 } 00:12:16.160 ], 00:12:16.160 "driver_specific": {} 00:12:16.160 } 00:12:16.160 ] 00:12:16.160 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.160 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:12:16.160 14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:16.160 14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:16.160 14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:16.160 14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.160 14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.160 14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.160 14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.160 14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:16.160 14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.160 14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.160 14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.160 14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.160 14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.160 14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.160 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.160 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.160 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.160 
14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.160 "name": "Existed_Raid", 00:12:16.160 "uuid": "6a5b0ae8-1369-44ee-a5e0-d0dea002fa1b", 00:12:16.160 "strip_size_kb": 0, 00:12:16.160 "state": "configuring", 00:12:16.160 "raid_level": "raid1", 00:12:16.160 "superblock": true, 00:12:16.160 "num_base_bdevs": 3, 00:12:16.160 "num_base_bdevs_discovered": 2, 00:12:16.160 "num_base_bdevs_operational": 3, 00:12:16.160 "base_bdevs_list": [ 00:12:16.160 { 00:12:16.160 "name": "BaseBdev1", 00:12:16.160 "uuid": "e4fe822d-d664-46b9-a315-a08adccdda2a", 00:12:16.160 "is_configured": true, 00:12:16.160 "data_offset": 2048, 00:12:16.160 "data_size": 63488 00:12:16.160 }, 00:12:16.160 { 00:12:16.160 "name": "BaseBdev2", 00:12:16.160 "uuid": "5f92098a-b875-473b-b7db-e28fe11fff77", 00:12:16.160 "is_configured": true, 00:12:16.160 "data_offset": 2048, 00:12:16.160 "data_size": 63488 00:12:16.160 }, 00:12:16.160 { 00:12:16.160 "name": "BaseBdev3", 00:12:16.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.160 "is_configured": false, 00:12:16.160 "data_offset": 0, 00:12:16.160 "data_size": 0 00:12:16.160 } 00:12:16.160 ] 00:12:16.160 }' 00:12:16.160 14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.160 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.728 14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:16.728 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.728 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.728 [2024-11-04 14:38:15.650876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:16.728 [2024-11-04 14:38:15.651274] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:12:16.728 [2024-11-04 14:38:15.651305] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:16.728 BaseBdev3 00:12:16.728 [2024-11-04 14:38:15.651652] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:16.728 [2024-11-04 14:38:15.651863] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:16.728 [2024-11-04 14:38:15.651879] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:16.728 [2024-11-04 14:38:15.652082] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.728 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.728 14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:16.728 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:16.728 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:16.728 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:16.728 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:16.728 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:16.728 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:16.728 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.728 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.728 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.728 14:38:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:16.728 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.728 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.728 [ 00:12:16.728 { 00:12:16.728 "name": "BaseBdev3", 00:12:16.728 "aliases": [ 00:12:16.728 "fd519716-1009-4db5-b099-1c4cfbd1d466" 00:12:16.728 ], 00:12:16.728 "product_name": "Malloc disk", 00:12:16.728 "block_size": 512, 00:12:16.728 "num_blocks": 65536, 00:12:16.728 "uuid": "fd519716-1009-4db5-b099-1c4cfbd1d466", 00:12:16.728 "assigned_rate_limits": { 00:12:16.728 "rw_ios_per_sec": 0, 00:12:16.728 "rw_mbytes_per_sec": 0, 00:12:16.728 "r_mbytes_per_sec": 0, 00:12:16.728 "w_mbytes_per_sec": 0 00:12:16.728 }, 00:12:16.728 "claimed": true, 00:12:16.728 "claim_type": "exclusive_write", 00:12:16.728 "zoned": false, 00:12:16.728 "supported_io_types": { 00:12:16.728 "read": true, 00:12:16.728 "write": true, 00:12:16.728 "unmap": true, 00:12:16.728 "flush": true, 00:12:16.728 "reset": true, 00:12:16.728 "nvme_admin": false, 00:12:16.728 "nvme_io": false, 00:12:16.728 "nvme_io_md": false, 00:12:16.728 "write_zeroes": true, 00:12:16.728 "zcopy": true, 00:12:16.728 "get_zone_info": false, 00:12:16.728 "zone_management": false, 00:12:16.728 "zone_append": false, 00:12:16.728 "compare": false, 00:12:16.728 "compare_and_write": false, 00:12:16.728 "abort": true, 00:12:16.728 "seek_hole": false, 00:12:16.728 "seek_data": false, 00:12:16.728 "copy": true, 00:12:16.728 "nvme_iov_md": false 00:12:16.728 }, 00:12:16.728 "memory_domains": [ 00:12:16.729 { 00:12:16.729 "dma_device_id": "system", 00:12:16.729 "dma_device_type": 1 00:12:16.729 }, 00:12:16.729 { 00:12:16.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.729 "dma_device_type": 2 00:12:16.729 } 00:12:16.729 ], 00:12:16.729 "driver_specific": {} 00:12:16.729 } 00:12:16.729 ] 
00:12:16.729 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.729 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:16.729 14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:16.729 14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:16.729 14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:16.729 14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.729 14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.729 14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.729 14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.729 14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:16.729 14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.729 14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.729 14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.729 14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.729 14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.729 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.729 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.729 14:38:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.729 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.729 14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.729 "name": "Existed_Raid", 00:12:16.729 "uuid": "6a5b0ae8-1369-44ee-a5e0-d0dea002fa1b", 00:12:16.729 "strip_size_kb": 0, 00:12:16.729 "state": "online", 00:12:16.729 "raid_level": "raid1", 00:12:16.729 "superblock": true, 00:12:16.729 "num_base_bdevs": 3, 00:12:16.729 "num_base_bdevs_discovered": 3, 00:12:16.729 "num_base_bdevs_operational": 3, 00:12:16.729 "base_bdevs_list": [ 00:12:16.729 { 00:12:16.729 "name": "BaseBdev1", 00:12:16.729 "uuid": "e4fe822d-d664-46b9-a315-a08adccdda2a", 00:12:16.729 "is_configured": true, 00:12:16.729 "data_offset": 2048, 00:12:16.729 "data_size": 63488 00:12:16.729 }, 00:12:16.729 { 00:12:16.729 "name": "BaseBdev2", 00:12:16.729 "uuid": "5f92098a-b875-473b-b7db-e28fe11fff77", 00:12:16.729 "is_configured": true, 00:12:16.729 "data_offset": 2048, 00:12:16.729 "data_size": 63488 00:12:16.729 }, 00:12:16.729 { 00:12:16.729 "name": "BaseBdev3", 00:12:16.729 "uuid": "fd519716-1009-4db5-b099-1c4cfbd1d466", 00:12:16.729 "is_configured": true, 00:12:16.729 "data_offset": 2048, 00:12:16.729 "data_size": 63488 00:12:16.729 } 00:12:16.729 ] 00:12:16.729 }' 00:12:16.729 14:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.729 14:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.334 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:17.334 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:17.334 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:12:17.334 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:17.334 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:17.334 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:17.334 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:17.334 14:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.334 14:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.334 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:17.334 [2024-11-04 14:38:16.195487] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:17.334 14:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.334 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:17.334 "name": "Existed_Raid", 00:12:17.334 "aliases": [ 00:12:17.334 "6a5b0ae8-1369-44ee-a5e0-d0dea002fa1b" 00:12:17.334 ], 00:12:17.334 "product_name": "Raid Volume", 00:12:17.334 "block_size": 512, 00:12:17.334 "num_blocks": 63488, 00:12:17.334 "uuid": "6a5b0ae8-1369-44ee-a5e0-d0dea002fa1b", 00:12:17.334 "assigned_rate_limits": { 00:12:17.334 "rw_ios_per_sec": 0, 00:12:17.334 "rw_mbytes_per_sec": 0, 00:12:17.334 "r_mbytes_per_sec": 0, 00:12:17.334 "w_mbytes_per_sec": 0 00:12:17.334 }, 00:12:17.334 "claimed": false, 00:12:17.334 "zoned": false, 00:12:17.334 "supported_io_types": { 00:12:17.334 "read": true, 00:12:17.334 "write": true, 00:12:17.334 "unmap": false, 00:12:17.334 "flush": false, 00:12:17.334 "reset": true, 00:12:17.334 "nvme_admin": false, 00:12:17.334 "nvme_io": false, 00:12:17.334 "nvme_io_md": false, 00:12:17.334 
"write_zeroes": true, 00:12:17.334 "zcopy": false, 00:12:17.334 "get_zone_info": false, 00:12:17.334 "zone_management": false, 00:12:17.334 "zone_append": false, 00:12:17.334 "compare": false, 00:12:17.334 "compare_and_write": false, 00:12:17.334 "abort": false, 00:12:17.334 "seek_hole": false, 00:12:17.334 "seek_data": false, 00:12:17.334 "copy": false, 00:12:17.334 "nvme_iov_md": false 00:12:17.334 }, 00:12:17.334 "memory_domains": [ 00:12:17.334 { 00:12:17.334 "dma_device_id": "system", 00:12:17.334 "dma_device_type": 1 00:12:17.334 }, 00:12:17.334 { 00:12:17.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.334 "dma_device_type": 2 00:12:17.334 }, 00:12:17.334 { 00:12:17.334 "dma_device_id": "system", 00:12:17.334 "dma_device_type": 1 00:12:17.334 }, 00:12:17.334 { 00:12:17.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.334 "dma_device_type": 2 00:12:17.334 }, 00:12:17.334 { 00:12:17.334 "dma_device_id": "system", 00:12:17.334 "dma_device_type": 1 00:12:17.334 }, 00:12:17.334 { 00:12:17.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.334 "dma_device_type": 2 00:12:17.334 } 00:12:17.334 ], 00:12:17.334 "driver_specific": { 00:12:17.334 "raid": { 00:12:17.334 "uuid": "6a5b0ae8-1369-44ee-a5e0-d0dea002fa1b", 00:12:17.334 "strip_size_kb": 0, 00:12:17.334 "state": "online", 00:12:17.334 "raid_level": "raid1", 00:12:17.334 "superblock": true, 00:12:17.334 "num_base_bdevs": 3, 00:12:17.334 "num_base_bdevs_discovered": 3, 00:12:17.334 "num_base_bdevs_operational": 3, 00:12:17.334 "base_bdevs_list": [ 00:12:17.334 { 00:12:17.334 "name": "BaseBdev1", 00:12:17.334 "uuid": "e4fe822d-d664-46b9-a315-a08adccdda2a", 00:12:17.334 "is_configured": true, 00:12:17.334 "data_offset": 2048, 00:12:17.334 "data_size": 63488 00:12:17.334 }, 00:12:17.334 { 00:12:17.334 "name": "BaseBdev2", 00:12:17.334 "uuid": "5f92098a-b875-473b-b7db-e28fe11fff77", 00:12:17.334 "is_configured": true, 00:12:17.334 "data_offset": 2048, 00:12:17.334 "data_size": 63488 00:12:17.334 }, 
00:12:17.334 { 00:12:17.334 "name": "BaseBdev3", 00:12:17.334 "uuid": "fd519716-1009-4db5-b099-1c4cfbd1d466", 00:12:17.334 "is_configured": true, 00:12:17.334 "data_offset": 2048, 00:12:17.334 "data_size": 63488 00:12:17.334 } 00:12:17.334 ] 00:12:17.335 } 00:12:17.335 } 00:12:17.335 }' 00:12:17.335 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:17.335 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:17.335 BaseBdev2 00:12:17.335 BaseBdev3' 00:12:17.335 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.335 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:17.335 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.335 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:17.335 14:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.335 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.335 14:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.335 14:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.335 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.335 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.335 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.335 
14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.335 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:17.335 14:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.335 14:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.335 14:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.622 [2024-11-04 14:38:16.503279] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.622 
14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.622 "name": "Existed_Raid", 00:12:17.622 "uuid": "6a5b0ae8-1369-44ee-a5e0-d0dea002fa1b", 00:12:17.622 "strip_size_kb": 0, 00:12:17.622 "state": "online", 00:12:17.622 "raid_level": "raid1", 00:12:17.622 "superblock": true, 00:12:17.622 "num_base_bdevs": 3, 00:12:17.622 "num_base_bdevs_discovered": 2, 00:12:17.622 "num_base_bdevs_operational": 2, 00:12:17.622 "base_bdevs_list": [ 00:12:17.622 { 00:12:17.622 "name": null, 00:12:17.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.622 "is_configured": false, 00:12:17.622 "data_offset": 0, 00:12:17.622 "data_size": 63488 00:12:17.622 }, 00:12:17.622 { 00:12:17.622 "name": "BaseBdev2", 00:12:17.622 "uuid": "5f92098a-b875-473b-b7db-e28fe11fff77", 00:12:17.622 "is_configured": true, 00:12:17.622 "data_offset": 2048, 00:12:17.622 "data_size": 63488 00:12:17.622 }, 00:12:17.622 { 00:12:17.622 "name": "BaseBdev3", 00:12:17.622 "uuid": "fd519716-1009-4db5-b099-1c4cfbd1d466", 00:12:17.622 "is_configured": true, 00:12:17.622 "data_offset": 2048, 00:12:17.622 "data_size": 63488 00:12:17.622 } 00:12:17.622 ] 00:12:17.622 }' 00:12:17.622 14:38:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.622 
14:38:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.188 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:18.188 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:18.188 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.188 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:18.188 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.188 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.188 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.188 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:18.188 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:18.188 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:18.188 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.188 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.188 [2024-11-04 14:38:17.181804] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:18.188 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.188 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:18.188 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:18.188 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:18.188 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.188 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.188 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:18.188 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.447 [2024-11-04 14:38:17.323781] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:18.447 [2024-11-04 14:38:17.323904] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:18.447 [2024-11-04 14:38:17.411999] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:18.447 [2024-11-04 14:38:17.412066] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:18.447 [2024-11-04 14:38:17.412085] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.447 BaseBdev2 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.447 [ 00:12:18.447 { 00:12:18.447 "name": "BaseBdev2", 00:12:18.447 "aliases": [ 00:12:18.447 "18a2680a-0096-4111-bf5d-7ac71e00d37b" 00:12:18.447 ], 00:12:18.447 "product_name": "Malloc disk", 00:12:18.447 "block_size": 512, 00:12:18.447 "num_blocks": 65536, 00:12:18.447 "uuid": "18a2680a-0096-4111-bf5d-7ac71e00d37b", 00:12:18.447 "assigned_rate_limits": { 00:12:18.447 "rw_ios_per_sec": 0, 00:12:18.447 "rw_mbytes_per_sec": 0, 00:12:18.447 "r_mbytes_per_sec": 0, 00:12:18.447 "w_mbytes_per_sec": 0 00:12:18.447 }, 00:12:18.447 "claimed": false, 00:12:18.447 "zoned": false, 00:12:18.447 "supported_io_types": { 00:12:18.447 "read": true, 00:12:18.447 "write": true, 00:12:18.447 "unmap": true, 00:12:18.447 "flush": true, 00:12:18.447 "reset": true, 00:12:18.447 "nvme_admin": false, 00:12:18.447 "nvme_io": false, 00:12:18.447 
"nvme_io_md": false, 00:12:18.447 "write_zeroes": true, 00:12:18.447 "zcopy": true, 00:12:18.447 "get_zone_info": false, 00:12:18.447 "zone_management": false, 00:12:18.447 "zone_append": false, 00:12:18.447 "compare": false, 00:12:18.447 "compare_and_write": false, 00:12:18.447 "abort": true, 00:12:18.447 "seek_hole": false, 00:12:18.447 "seek_data": false, 00:12:18.447 "copy": true, 00:12:18.447 "nvme_iov_md": false 00:12:18.447 }, 00:12:18.447 "memory_domains": [ 00:12:18.447 { 00:12:18.447 "dma_device_id": "system", 00:12:18.447 "dma_device_type": 1 00:12:18.447 }, 00:12:18.447 { 00:12:18.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.447 "dma_device_type": 2 00:12:18.447 } 00:12:18.447 ], 00:12:18.447 "driver_specific": {} 00:12:18.447 } 00:12:18.447 ] 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:18.447 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:18.448 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:18.448 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:18.448 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.448 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.707 BaseBdev3 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.707 [ 00:12:18.707 { 00:12:18.707 "name": "BaseBdev3", 00:12:18.707 "aliases": [ 00:12:18.707 "6306c444-f63c-41f8-a14f-34673f453495" 00:12:18.707 ], 00:12:18.707 "product_name": "Malloc disk", 00:12:18.707 "block_size": 512, 00:12:18.707 "num_blocks": 65536, 00:12:18.707 "uuid": "6306c444-f63c-41f8-a14f-34673f453495", 00:12:18.707 "assigned_rate_limits": { 00:12:18.707 "rw_ios_per_sec": 0, 00:12:18.707 "rw_mbytes_per_sec": 0, 00:12:18.707 "r_mbytes_per_sec": 0, 00:12:18.707 "w_mbytes_per_sec": 0 00:12:18.707 }, 00:12:18.707 "claimed": false, 00:12:18.707 "zoned": false, 00:12:18.707 "supported_io_types": { 00:12:18.707 "read": true, 00:12:18.707 "write": true, 00:12:18.707 "unmap": true, 00:12:18.707 "flush": true, 00:12:18.707 "reset": true, 00:12:18.707 "nvme_admin": false, 
00:12:18.707 "nvme_io": false, 00:12:18.707 "nvme_io_md": false, 00:12:18.707 "write_zeroes": true, 00:12:18.707 "zcopy": true, 00:12:18.707 "get_zone_info": false, 00:12:18.707 "zone_management": false, 00:12:18.707 "zone_append": false, 00:12:18.707 "compare": false, 00:12:18.707 "compare_and_write": false, 00:12:18.707 "abort": true, 00:12:18.707 "seek_hole": false, 00:12:18.707 "seek_data": false, 00:12:18.707 "copy": true, 00:12:18.707 "nvme_iov_md": false 00:12:18.707 }, 00:12:18.707 "memory_domains": [ 00:12:18.707 { 00:12:18.707 "dma_device_id": "system", 00:12:18.707 "dma_device_type": 1 00:12:18.707 }, 00:12:18.707 { 00:12:18.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.707 "dma_device_type": 2 00:12:18.707 } 00:12:18.707 ], 00:12:18.707 "driver_specific": {} 00:12:18.707 } 00:12:18.707 ] 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.707 [2024-11-04 14:38:17.623832] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:18.707 [2024-11-04 14:38:17.623893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:18.707 [2024-11-04 14:38:17.623919] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:18.707 [2024-11-04 14:38:17.626499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.707 
14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.707 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.707 "name": "Existed_Raid", 00:12:18.707 "uuid": "67ab550d-5ed4-42bf-90af-d7203ab63850", 00:12:18.707 "strip_size_kb": 0, 00:12:18.707 "state": "configuring", 00:12:18.707 "raid_level": "raid1", 00:12:18.707 "superblock": true, 00:12:18.707 "num_base_bdevs": 3, 00:12:18.707 "num_base_bdevs_discovered": 2, 00:12:18.707 "num_base_bdevs_operational": 3, 00:12:18.707 "base_bdevs_list": [ 00:12:18.707 { 00:12:18.707 "name": "BaseBdev1", 00:12:18.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:18.707 "is_configured": false, 00:12:18.707 "data_offset": 0, 00:12:18.707 "data_size": 0 00:12:18.707 }, 00:12:18.707 { 00:12:18.707 "name": "BaseBdev2", 00:12:18.707 "uuid": "18a2680a-0096-4111-bf5d-7ac71e00d37b", 00:12:18.707 "is_configured": true, 00:12:18.707 "data_offset": 2048, 00:12:18.707 "data_size": 63488 00:12:18.707 }, 00:12:18.707 { 00:12:18.707 "name": "BaseBdev3", 00:12:18.707 "uuid": "6306c444-f63c-41f8-a14f-34673f453495", 00:12:18.707 "is_configured": true, 00:12:18.708 "data_offset": 2048, 00:12:18.708 "data_size": 63488 00:12:18.708 } 00:12:18.708 ] 00:12:18.708 }' 00:12:18.708 14:38:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.708 14:38:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.275 14:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:19.275 14:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.275 14:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.275 [2024-11-04 14:38:18.164000] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:19.275 14:38:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.275 14:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:19.275 14:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.275 14:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.275 14:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.275 14:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.275 14:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:19.275 14:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.275 14:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.275 14:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.275 14:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.275 14:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.275 14:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.275 14:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.275 14:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.275 14:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.275 14:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.275 "name": 
"Existed_Raid", 00:12:19.275 "uuid": "67ab550d-5ed4-42bf-90af-d7203ab63850", 00:12:19.275 "strip_size_kb": 0, 00:12:19.275 "state": "configuring", 00:12:19.275 "raid_level": "raid1", 00:12:19.275 "superblock": true, 00:12:19.275 "num_base_bdevs": 3, 00:12:19.275 "num_base_bdevs_discovered": 1, 00:12:19.275 "num_base_bdevs_operational": 3, 00:12:19.275 "base_bdevs_list": [ 00:12:19.275 { 00:12:19.275 "name": "BaseBdev1", 00:12:19.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.275 "is_configured": false, 00:12:19.275 "data_offset": 0, 00:12:19.275 "data_size": 0 00:12:19.275 }, 00:12:19.275 { 00:12:19.275 "name": null, 00:12:19.275 "uuid": "18a2680a-0096-4111-bf5d-7ac71e00d37b", 00:12:19.275 "is_configured": false, 00:12:19.275 "data_offset": 0, 00:12:19.275 "data_size": 63488 00:12:19.275 }, 00:12:19.275 { 00:12:19.275 "name": "BaseBdev3", 00:12:19.275 "uuid": "6306c444-f63c-41f8-a14f-34673f453495", 00:12:19.275 "is_configured": true, 00:12:19.275 "data_offset": 2048, 00:12:19.275 "data_size": 63488 00:12:19.275 } 00:12:19.275 ] 00:12:19.275 }' 00:12:19.275 14:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.275 14:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.842 14:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:19.842 14:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.842 14:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.842 14:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.842 14:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.842 14:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:19.842 
14:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:19.842 14:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.842 14:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.842 [2024-11-04 14:38:18.785715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:19.842 BaseBdev1 00:12:19.842 14:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.842 14:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:19.842 14:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:19.842 14:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:19.842 14:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:19.842 14:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:19.842 14:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:19.842 14:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:19.842 14:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.842 14:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.842 14:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.842 14:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:19.842 14:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:19.842 14:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.842 [ 00:12:19.842 { 00:12:19.842 "name": "BaseBdev1", 00:12:19.842 "aliases": [ 00:12:19.842 "495e58ca-ad1d-406b-8cdb-0d115544aa56" 00:12:19.842 ], 00:12:19.842 "product_name": "Malloc disk", 00:12:19.842 "block_size": 512, 00:12:19.842 "num_blocks": 65536, 00:12:19.842 "uuid": "495e58ca-ad1d-406b-8cdb-0d115544aa56", 00:12:19.842 "assigned_rate_limits": { 00:12:19.842 "rw_ios_per_sec": 0, 00:12:19.842 "rw_mbytes_per_sec": 0, 00:12:19.842 "r_mbytes_per_sec": 0, 00:12:19.842 "w_mbytes_per_sec": 0 00:12:19.842 }, 00:12:19.843 "claimed": true, 00:12:19.843 "claim_type": "exclusive_write", 00:12:19.843 "zoned": false, 00:12:19.843 "supported_io_types": { 00:12:19.843 "read": true, 00:12:19.843 "write": true, 00:12:19.843 "unmap": true, 00:12:19.843 "flush": true, 00:12:19.843 "reset": true, 00:12:19.843 "nvme_admin": false, 00:12:19.843 "nvme_io": false, 00:12:19.843 "nvme_io_md": false, 00:12:19.843 "write_zeroes": true, 00:12:19.843 "zcopy": true, 00:12:19.843 "get_zone_info": false, 00:12:19.843 "zone_management": false, 00:12:19.843 "zone_append": false, 00:12:19.843 "compare": false, 00:12:19.843 "compare_and_write": false, 00:12:19.843 "abort": true, 00:12:19.843 "seek_hole": false, 00:12:19.843 "seek_data": false, 00:12:19.843 "copy": true, 00:12:19.843 "nvme_iov_md": false 00:12:19.843 }, 00:12:19.843 "memory_domains": [ 00:12:19.843 { 00:12:19.843 "dma_device_id": "system", 00:12:19.843 "dma_device_type": 1 00:12:19.843 }, 00:12:19.843 { 00:12:19.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.843 "dma_device_type": 2 00:12:19.843 } 00:12:19.843 ], 00:12:19.843 "driver_specific": {} 00:12:19.843 } 00:12:19.843 ] 00:12:19.843 14:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.843 14:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:19.843 
14:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:19.843 14:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:19.843 14:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:19.843 14:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.843 14:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.843 14:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:19.843 14:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.843 14:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.843 14:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.843 14:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.843 14:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.843 14:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.843 14:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.843 14:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:19.843 14:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.843 14:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.843 "name": "Existed_Raid", 00:12:19.843 "uuid": "67ab550d-5ed4-42bf-90af-d7203ab63850", 00:12:19.843 "strip_size_kb": 0, 
00:12:19.843 "state": "configuring", 00:12:19.843 "raid_level": "raid1", 00:12:19.843 "superblock": true, 00:12:19.843 "num_base_bdevs": 3, 00:12:19.843 "num_base_bdevs_discovered": 2, 00:12:19.843 "num_base_bdevs_operational": 3, 00:12:19.843 "base_bdevs_list": [ 00:12:19.843 { 00:12:19.843 "name": "BaseBdev1", 00:12:19.843 "uuid": "495e58ca-ad1d-406b-8cdb-0d115544aa56", 00:12:19.843 "is_configured": true, 00:12:19.843 "data_offset": 2048, 00:12:19.843 "data_size": 63488 00:12:19.843 }, 00:12:19.843 { 00:12:19.843 "name": null, 00:12:19.843 "uuid": "18a2680a-0096-4111-bf5d-7ac71e00d37b", 00:12:19.843 "is_configured": false, 00:12:19.843 "data_offset": 0, 00:12:19.843 "data_size": 63488 00:12:19.843 }, 00:12:19.843 { 00:12:19.843 "name": "BaseBdev3", 00:12:19.843 "uuid": "6306c444-f63c-41f8-a14f-34673f453495", 00:12:19.843 "is_configured": true, 00:12:19.843 "data_offset": 2048, 00:12:19.843 "data_size": 63488 00:12:19.843 } 00:12:19.843 ] 00:12:19.843 }' 00:12:19.843 14:38:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.843 14:38:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.410 14:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:20.410 14:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.410 14:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.410 14:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.410 14:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.410 14:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:20.410 14:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:12:20.410 14:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.410 14:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.410 [2024-11-04 14:38:19.397965] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:20.410 14:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.410 14:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:20.410 14:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.410 14:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.410 14:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.410 14:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.410 14:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:20.410 14:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.410 14:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.410 14:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.410 14:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.410 14:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.410 14:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.410 14:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:20.410 14:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.410 14:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.410 14:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.410 "name": "Existed_Raid", 00:12:20.410 "uuid": "67ab550d-5ed4-42bf-90af-d7203ab63850", 00:12:20.410 "strip_size_kb": 0, 00:12:20.410 "state": "configuring", 00:12:20.410 "raid_level": "raid1", 00:12:20.410 "superblock": true, 00:12:20.410 "num_base_bdevs": 3, 00:12:20.410 "num_base_bdevs_discovered": 1, 00:12:20.410 "num_base_bdevs_operational": 3, 00:12:20.410 "base_bdevs_list": [ 00:12:20.410 { 00:12:20.410 "name": "BaseBdev1", 00:12:20.410 "uuid": "495e58ca-ad1d-406b-8cdb-0d115544aa56", 00:12:20.410 "is_configured": true, 00:12:20.410 "data_offset": 2048, 00:12:20.410 "data_size": 63488 00:12:20.410 }, 00:12:20.410 { 00:12:20.410 "name": null, 00:12:20.410 "uuid": "18a2680a-0096-4111-bf5d-7ac71e00d37b", 00:12:20.410 "is_configured": false, 00:12:20.410 "data_offset": 0, 00:12:20.411 "data_size": 63488 00:12:20.411 }, 00:12:20.411 { 00:12:20.411 "name": null, 00:12:20.411 "uuid": "6306c444-f63c-41f8-a14f-34673f453495", 00:12:20.411 "is_configured": false, 00:12:20.411 "data_offset": 0, 00:12:20.411 "data_size": 63488 00:12:20.411 } 00:12:20.411 ] 00:12:20.411 }' 00:12:20.411 14:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.411 14:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.976 14:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.976 14:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:20.976 14:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:20.976 14:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.976 14:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.976 14:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:20.976 14:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:20.976 14:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.976 14:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.976 [2024-11-04 14:38:19.962150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:20.976 14:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.976 14:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:20.976 14:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:20.976 14:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.976 14:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.976 14:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.976 14:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:20.976 14:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.976 14:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.976 14:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:12:20.976 14:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.976 14:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.976 14:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.976 14:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.976 14:38:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.976 14:38:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.976 14:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.976 "name": "Existed_Raid", 00:12:20.976 "uuid": "67ab550d-5ed4-42bf-90af-d7203ab63850", 00:12:20.976 "strip_size_kb": 0, 00:12:20.976 "state": "configuring", 00:12:20.976 "raid_level": "raid1", 00:12:20.976 "superblock": true, 00:12:20.976 "num_base_bdevs": 3, 00:12:20.976 "num_base_bdevs_discovered": 2, 00:12:20.976 "num_base_bdevs_operational": 3, 00:12:20.976 "base_bdevs_list": [ 00:12:20.976 { 00:12:20.976 "name": "BaseBdev1", 00:12:20.976 "uuid": "495e58ca-ad1d-406b-8cdb-0d115544aa56", 00:12:20.976 "is_configured": true, 00:12:20.976 "data_offset": 2048, 00:12:20.976 "data_size": 63488 00:12:20.976 }, 00:12:20.976 { 00:12:20.976 "name": null, 00:12:20.976 "uuid": "18a2680a-0096-4111-bf5d-7ac71e00d37b", 00:12:20.976 "is_configured": false, 00:12:20.976 "data_offset": 0, 00:12:20.976 "data_size": 63488 00:12:20.976 }, 00:12:20.976 { 00:12:20.976 "name": "BaseBdev3", 00:12:20.976 "uuid": "6306c444-f63c-41f8-a14f-34673f453495", 00:12:20.976 "is_configured": true, 00:12:20.976 "data_offset": 2048, 00:12:20.976 "data_size": 63488 00:12:20.976 } 00:12:20.976 ] 00:12:20.976 }' 00:12:20.976 14:38:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.976 14:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.610 14:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.610 14:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.610 14:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:21.610 14:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.610 14:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.610 14:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:21.610 14:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:21.610 14:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.610 14:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.610 [2024-11-04 14:38:20.494287] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:21.610 14:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.610 14:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:21.610 14:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:21.610 14:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.610 14:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.610 14:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:12:21.610 14:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:21.610 14:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.610 14:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.610 14:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.610 14:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.610 14:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.610 14:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.610 14:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:21.610 14:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.610 14:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.610 14:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.610 "name": "Existed_Raid", 00:12:21.610 "uuid": "67ab550d-5ed4-42bf-90af-d7203ab63850", 00:12:21.610 "strip_size_kb": 0, 00:12:21.610 "state": "configuring", 00:12:21.610 "raid_level": "raid1", 00:12:21.610 "superblock": true, 00:12:21.610 "num_base_bdevs": 3, 00:12:21.610 "num_base_bdevs_discovered": 1, 00:12:21.610 "num_base_bdevs_operational": 3, 00:12:21.610 "base_bdevs_list": [ 00:12:21.610 { 00:12:21.610 "name": null, 00:12:21.610 "uuid": "495e58ca-ad1d-406b-8cdb-0d115544aa56", 00:12:21.610 "is_configured": false, 00:12:21.610 "data_offset": 0, 00:12:21.610 "data_size": 63488 00:12:21.610 }, 00:12:21.610 { 00:12:21.610 "name": null, 00:12:21.610 "uuid": 
"18a2680a-0096-4111-bf5d-7ac71e00d37b", 00:12:21.610 "is_configured": false, 00:12:21.610 "data_offset": 0, 00:12:21.610 "data_size": 63488 00:12:21.610 }, 00:12:21.610 { 00:12:21.610 "name": "BaseBdev3", 00:12:21.610 "uuid": "6306c444-f63c-41f8-a14f-34673f453495", 00:12:21.610 "is_configured": true, 00:12:21.610 "data_offset": 2048, 00:12:21.610 "data_size": 63488 00:12:21.610 } 00:12:21.610 ] 00:12:21.610 }' 00:12:21.610 14:38:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.610 14:38:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.177 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.177 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:22.177 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.177 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.177 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.177 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:22.177 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:22.177 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.177 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.177 [2024-11-04 14:38:21.154602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:22.177 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.177 14:38:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:22.177 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.177 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.177 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.177 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.177 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:22.177 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.177 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.178 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.178 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.178 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.178 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.178 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.178 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.178 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.178 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.178 "name": "Existed_Raid", 00:12:22.178 "uuid": "67ab550d-5ed4-42bf-90af-d7203ab63850", 00:12:22.178 "strip_size_kb": 0, 00:12:22.178 "state": "configuring", 00:12:22.178 
"raid_level": "raid1", 00:12:22.178 "superblock": true, 00:12:22.178 "num_base_bdevs": 3, 00:12:22.178 "num_base_bdevs_discovered": 2, 00:12:22.178 "num_base_bdevs_operational": 3, 00:12:22.178 "base_bdevs_list": [ 00:12:22.178 { 00:12:22.178 "name": null, 00:12:22.178 "uuid": "495e58ca-ad1d-406b-8cdb-0d115544aa56", 00:12:22.178 "is_configured": false, 00:12:22.178 "data_offset": 0, 00:12:22.178 "data_size": 63488 00:12:22.178 }, 00:12:22.178 { 00:12:22.178 "name": "BaseBdev2", 00:12:22.178 "uuid": "18a2680a-0096-4111-bf5d-7ac71e00d37b", 00:12:22.178 "is_configured": true, 00:12:22.178 "data_offset": 2048, 00:12:22.178 "data_size": 63488 00:12:22.178 }, 00:12:22.178 { 00:12:22.178 "name": "BaseBdev3", 00:12:22.178 "uuid": "6306c444-f63c-41f8-a14f-34673f453495", 00:12:22.178 "is_configured": true, 00:12:22.178 "data_offset": 2048, 00:12:22.178 "data_size": 63488 00:12:22.178 } 00:12:22.178 ] 00:12:22.178 }' 00:12:22.178 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.178 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.745 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:22.745 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.745 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.745 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.745 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.745 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:22.745 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.745 14:38:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.745 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:22.745 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.745 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.745 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 495e58ca-ad1d-406b-8cdb-0d115544aa56 00:12:22.745 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.745 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.745 [2024-11-04 14:38:21.833908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:22.745 [2024-11-04 14:38:21.834230] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:22.745 [2024-11-04 14:38:21.834248] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:22.745 NewBaseBdev 00:12:22.745 [2024-11-04 14:38:21.834550] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:22.745 [2024-11-04 14:38:21.834749] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:22.745 [2024-11-04 14:38:21.834771] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:22.745 [2024-11-04 14:38:21.834950] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:22.746 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.746 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:22.746 
14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:12:22.746 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:22.746 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:22.746 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:22.746 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:22.746 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:22.746 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.746 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.746 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.746 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:22.746 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.746 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.746 [ 00:12:22.746 { 00:12:22.746 "name": "NewBaseBdev", 00:12:22.746 "aliases": [ 00:12:22.746 "495e58ca-ad1d-406b-8cdb-0d115544aa56" 00:12:22.746 ], 00:12:22.746 "product_name": "Malloc disk", 00:12:22.746 "block_size": 512, 00:12:22.746 "num_blocks": 65536, 00:12:22.746 "uuid": "495e58ca-ad1d-406b-8cdb-0d115544aa56", 00:12:22.746 "assigned_rate_limits": { 00:12:22.746 "rw_ios_per_sec": 0, 00:12:22.746 "rw_mbytes_per_sec": 0, 00:12:22.746 "r_mbytes_per_sec": 0, 00:12:22.746 "w_mbytes_per_sec": 0 00:12:22.746 }, 00:12:22.746 "claimed": true, 00:12:22.746 "claim_type": "exclusive_write", 00:12:22.746 
"zoned": false, 00:12:22.746 "supported_io_types": { 00:12:22.746 "read": true, 00:12:22.746 "write": true, 00:12:22.746 "unmap": true, 00:12:22.746 "flush": true, 00:12:22.746 "reset": true, 00:12:22.746 "nvme_admin": false, 00:12:22.746 "nvme_io": false, 00:12:22.746 "nvme_io_md": false, 00:12:22.746 "write_zeroes": true, 00:12:22.746 "zcopy": true, 00:12:22.746 "get_zone_info": false, 00:12:22.746 "zone_management": false, 00:12:22.746 "zone_append": false, 00:12:22.746 "compare": false, 00:12:22.746 "compare_and_write": false, 00:12:22.746 "abort": true, 00:12:22.746 "seek_hole": false, 00:12:22.746 "seek_data": false, 00:12:22.746 "copy": true, 00:12:22.746 "nvme_iov_md": false 00:12:22.746 }, 00:12:22.746 "memory_domains": [ 00:12:22.746 { 00:12:22.746 "dma_device_id": "system", 00:12:22.746 "dma_device_type": 1 00:12:22.746 }, 00:12:22.746 { 00:12:22.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.746 "dma_device_type": 2 00:12:22.746 } 00:12:22.746 ], 00:12:22.746 "driver_specific": {} 00:12:22.746 } 00:12:22.746 ] 00:12:22.746 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.746 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:22.746 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:22.746 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.746 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.746 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.746 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.746 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:12:22.746 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.746 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.746 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.746 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.005 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.005 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.005 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.005 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.005 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.005 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.005 "name": "Existed_Raid", 00:12:23.005 "uuid": "67ab550d-5ed4-42bf-90af-d7203ab63850", 00:12:23.005 "strip_size_kb": 0, 00:12:23.005 "state": "online", 00:12:23.005 "raid_level": "raid1", 00:12:23.005 "superblock": true, 00:12:23.005 "num_base_bdevs": 3, 00:12:23.005 "num_base_bdevs_discovered": 3, 00:12:23.005 "num_base_bdevs_operational": 3, 00:12:23.005 "base_bdevs_list": [ 00:12:23.005 { 00:12:23.005 "name": "NewBaseBdev", 00:12:23.005 "uuid": "495e58ca-ad1d-406b-8cdb-0d115544aa56", 00:12:23.005 "is_configured": true, 00:12:23.005 "data_offset": 2048, 00:12:23.005 "data_size": 63488 00:12:23.005 }, 00:12:23.005 { 00:12:23.005 "name": "BaseBdev2", 00:12:23.005 "uuid": "18a2680a-0096-4111-bf5d-7ac71e00d37b", 00:12:23.005 "is_configured": true, 00:12:23.005 "data_offset": 2048, 00:12:23.005 "data_size": 63488 00:12:23.005 }, 00:12:23.005 
{ 00:12:23.005 "name": "BaseBdev3", 00:12:23.005 "uuid": "6306c444-f63c-41f8-a14f-34673f453495", 00:12:23.005 "is_configured": true, 00:12:23.005 "data_offset": 2048, 00:12:23.005 "data_size": 63488 00:12:23.005 } 00:12:23.005 ] 00:12:23.005 }' 00:12:23.005 14:38:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.005 14:38:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.263 14:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:23.263 14:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:23.264 14:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:23.264 14:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:23.264 14:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:23.264 14:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:23.264 14:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:23.264 14:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:23.522 14:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.523 14:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.523 [2024-11-04 14:38:22.390542] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:23.523 14:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.523 14:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:23.523 "name": "Existed_Raid", 00:12:23.523 
"aliases": [ 00:12:23.523 "67ab550d-5ed4-42bf-90af-d7203ab63850" 00:12:23.523 ], 00:12:23.523 "product_name": "Raid Volume", 00:12:23.523 "block_size": 512, 00:12:23.523 "num_blocks": 63488, 00:12:23.523 "uuid": "67ab550d-5ed4-42bf-90af-d7203ab63850", 00:12:23.523 "assigned_rate_limits": { 00:12:23.523 "rw_ios_per_sec": 0, 00:12:23.523 "rw_mbytes_per_sec": 0, 00:12:23.523 "r_mbytes_per_sec": 0, 00:12:23.523 "w_mbytes_per_sec": 0 00:12:23.523 }, 00:12:23.523 "claimed": false, 00:12:23.523 "zoned": false, 00:12:23.523 "supported_io_types": { 00:12:23.523 "read": true, 00:12:23.523 "write": true, 00:12:23.523 "unmap": false, 00:12:23.523 "flush": false, 00:12:23.523 "reset": true, 00:12:23.523 "nvme_admin": false, 00:12:23.523 "nvme_io": false, 00:12:23.523 "nvme_io_md": false, 00:12:23.523 "write_zeroes": true, 00:12:23.523 "zcopy": false, 00:12:23.523 "get_zone_info": false, 00:12:23.523 "zone_management": false, 00:12:23.523 "zone_append": false, 00:12:23.523 "compare": false, 00:12:23.523 "compare_and_write": false, 00:12:23.523 "abort": false, 00:12:23.523 "seek_hole": false, 00:12:23.523 "seek_data": false, 00:12:23.523 "copy": false, 00:12:23.523 "nvme_iov_md": false 00:12:23.523 }, 00:12:23.523 "memory_domains": [ 00:12:23.523 { 00:12:23.523 "dma_device_id": "system", 00:12:23.523 "dma_device_type": 1 00:12:23.523 }, 00:12:23.523 { 00:12:23.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.523 "dma_device_type": 2 00:12:23.523 }, 00:12:23.523 { 00:12:23.523 "dma_device_id": "system", 00:12:23.523 "dma_device_type": 1 00:12:23.523 }, 00:12:23.523 { 00:12:23.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.523 "dma_device_type": 2 00:12:23.523 }, 00:12:23.523 { 00:12:23.523 "dma_device_id": "system", 00:12:23.523 "dma_device_type": 1 00:12:23.523 }, 00:12:23.523 { 00:12:23.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.523 "dma_device_type": 2 00:12:23.523 } 00:12:23.523 ], 00:12:23.523 "driver_specific": { 00:12:23.523 "raid": { 00:12:23.523 
"uuid": "67ab550d-5ed4-42bf-90af-d7203ab63850", 00:12:23.523 "strip_size_kb": 0, 00:12:23.523 "state": "online", 00:12:23.523 "raid_level": "raid1", 00:12:23.523 "superblock": true, 00:12:23.523 "num_base_bdevs": 3, 00:12:23.523 "num_base_bdevs_discovered": 3, 00:12:23.523 "num_base_bdevs_operational": 3, 00:12:23.523 "base_bdevs_list": [ 00:12:23.523 { 00:12:23.523 "name": "NewBaseBdev", 00:12:23.523 "uuid": "495e58ca-ad1d-406b-8cdb-0d115544aa56", 00:12:23.523 "is_configured": true, 00:12:23.523 "data_offset": 2048, 00:12:23.523 "data_size": 63488 00:12:23.523 }, 00:12:23.523 { 00:12:23.523 "name": "BaseBdev2", 00:12:23.523 "uuid": "18a2680a-0096-4111-bf5d-7ac71e00d37b", 00:12:23.523 "is_configured": true, 00:12:23.523 "data_offset": 2048, 00:12:23.523 "data_size": 63488 00:12:23.523 }, 00:12:23.523 { 00:12:23.523 "name": "BaseBdev3", 00:12:23.523 "uuid": "6306c444-f63c-41f8-a14f-34673f453495", 00:12:23.523 "is_configured": true, 00:12:23.523 "data_offset": 2048, 00:12:23.523 "data_size": 63488 00:12:23.523 } 00:12:23.523 ] 00:12:23.523 } 00:12:23.523 } 00:12:23.523 }' 00:12:23.523 14:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:23.523 14:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:23.523 BaseBdev2 00:12:23.523 BaseBdev3' 00:12:23.523 14:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.523 14:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:23.523 14:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.523 14:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:23.523 14:38:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.523 14:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.523 14:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.523 14:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.523 14:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.523 14:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.523 14:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.523 14:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:23.523 14:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.523 14:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.523 14:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.523 14:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.781 14:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.781 14:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.781 14:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:23.781 14:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:23.781 14:38:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.781 14:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.781 14:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:23.781 14:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.781 14:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:23.781 14:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:23.781 14:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:23.781 14:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.781 14:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.781 [2024-11-04 14:38:22.710291] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:23.781 [2024-11-04 14:38:22.710332] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:23.781 [2024-11-04 14:38:22.710432] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:23.781 [2024-11-04 14:38:22.710781] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:23.781 [2024-11-04 14:38:22.710798] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:23.781 14:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.781 14:38:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68074 00:12:23.781 14:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # 
'[' -z 68074 ']' 00:12:23.781 14:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 68074 00:12:23.781 14:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:12:23.781 14:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:23.781 14:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68074 00:12:23.781 killing process with pid 68074 00:12:23.781 14:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:23.781 14:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:23.781 14:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68074' 00:12:23.781 14:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 68074 00:12:23.782 [2024-11-04 14:38:22.754103] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:23.782 14:38:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 68074 00:12:24.039 [2024-11-04 14:38:23.027781] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:24.974 14:38:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:24.974 00:12:24.974 real 0m11.789s 00:12:24.974 user 0m19.591s 00:12:24.974 sys 0m1.609s 00:12:24.974 14:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:24.974 14:38:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.974 ************************************ 00:12:24.974 END TEST raid_state_function_test_sb 00:12:24.974 ************************************ 00:12:24.974 14:38:24 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:12:24.974 14:38:24 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:24.974 14:38:24 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:24.974 14:38:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:25.232 ************************************ 00:12:25.232 START TEST raid_superblock_test 00:12:25.232 ************************************ 00:12:25.232 14:38:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 3 00:12:25.232 14:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:25.233 14:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:12:25.233 14:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:25.233 14:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:25.233 14:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:25.233 14:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:25.233 14:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:25.233 14:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:25.233 14:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:25.233 14:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:25.233 14:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:25.233 14:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:25.233 14:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:25.233 14:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:12:25.233 14:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:25.233 14:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68705 00:12:25.233 14:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68705 00:12:25.233 14:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:25.233 14:38:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 68705 ']' 00:12:25.233 14:38:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.233 14:38:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:25.233 14:38:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.233 14:38:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:25.233 14:38:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.233 [2024-11-04 14:38:24.213014] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
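The xtrace lines above and below repeatedly pipe `bdev_get_bdevs` output through two `jq` filters: one that picks the names of configured base bdevs out of `.driver_specific.raid.base_bdevs_list`, and one that joins `[.block_size, .md_size, .md_interleave, .dif_type]` into a geometry string. A hedged, self-contained sketch of both filters is below — the sample JSON is a trimmed stand-in for real `rpc_cmd` output, not an actual RPC response from this run:

```shell
#!/usr/bin/env bash
# Sketch of the two jq filters used by bdev_raid.sh (@188 and @189/@192 in
# the log). The sample JSON below is a hypothetical, minimal stand-in for
# the bdev_get_bdevs RPC output shown in the log.
sample='{"block_size":512,"md_size":null,"md_interleave":null,"dif_type":null,
  "driver_specific":{"raid":{"base_bdevs_list":[
    {"name":"pt1","is_configured":true},
    {"name":"pt2","is_configured":true},
    {"name":"pt3","is_configured":false}]}}}'

# Filter 1: names of configured base bdevs only (pt3 is dropped here).
names=$(echo "$sample" | jq -r \
  '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')

# Filter 2: geometry string. jq's join() treats null as an empty string, so
# the three null metadata fields leave trailing spaces -- which is why the
# log compares against "512" followed by escaped spaces: [[ 512 == \5\1\2\ \ \ ]]
geom=$(echo "$sample" | jq -r \
  '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')

printf '%s\n' "$names"
printf '[%s]\n' "$geom"
```

Note that command substitution strips trailing newlines but preserves trailing spaces, so `$geom` keeps the three padding spaces that the test's `[[ … ]]` pattern match depends on.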
00:12:25.233 [2024-11-04 14:38:24.213442] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68705 ] 00:12:25.492 [2024-11-04 14:38:24.398444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.492 [2024-11-04 14:38:24.530459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.750 [2024-11-04 14:38:24.734530] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:25.750 [2024-11-04 14:38:24.734601] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:26.317 
14:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.317 malloc1 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.317 [2024-11-04 14:38:25.268571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:26.317 [2024-11-04 14:38:25.268817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.317 [2024-11-04 14:38:25.268894] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:26.317 [2024-11-04 14:38:25.269136] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.317 [2024-11-04 14:38:25.272138] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.317 [2024-11-04 14:38:25.272302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:26.317 pt1 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.317 malloc2 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.317 [2024-11-04 14:38:25.324481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:26.317 [2024-11-04 14:38:25.324567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.317 [2024-11-04 14:38:25.324599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:26.317 [2024-11-04 14:38:25.324614] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.317 [2024-11-04 14:38:25.327442] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.317 [2024-11-04 14:38:25.327486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:26.317 
pt2 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.317 malloc3 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.317 14:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.317 [2024-11-04 14:38:25.391166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:26.317 [2024-11-04 14:38:25.391249] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.317 [2024-11-04 14:38:25.391283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:26.317 [2024-11-04 14:38:25.391299] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.318 [2024-11-04 14:38:25.394115] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.318 [2024-11-04 14:38:25.394290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:26.318 pt3 00:12:26.318 14:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.318 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:26.318 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:26.318 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:12:26.318 14:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.318 14:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.318 [2024-11-04 14:38:25.403260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:26.318 [2024-11-04 14:38:25.405651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:26.318 [2024-11-04 14:38:25.405745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:26.318 [2024-11-04 14:38:25.405998] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:26.318 [2024-11-04 14:38:25.406027] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:26.318 [2024-11-04 14:38:25.406328] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:26.318 
[2024-11-04 14:38:25.406550] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:26.318 [2024-11-04 14:38:25.406570] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:26.318 [2024-11-04 14:38:25.406760] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.318 14:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.318 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:26.318 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.318 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.318 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.318 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.318 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:26.318 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.318 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.318 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.318 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.318 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.318 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.318 14:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.318 14:38:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:26.318 14:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.576 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.576 "name": "raid_bdev1", 00:12:26.576 "uuid": "71233d12-3fa7-4d1c-9221-d757f0f5c9f9", 00:12:26.576 "strip_size_kb": 0, 00:12:26.576 "state": "online", 00:12:26.576 "raid_level": "raid1", 00:12:26.576 "superblock": true, 00:12:26.576 "num_base_bdevs": 3, 00:12:26.576 "num_base_bdevs_discovered": 3, 00:12:26.576 "num_base_bdevs_operational": 3, 00:12:26.576 "base_bdevs_list": [ 00:12:26.576 { 00:12:26.576 "name": "pt1", 00:12:26.576 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:26.576 "is_configured": true, 00:12:26.576 "data_offset": 2048, 00:12:26.576 "data_size": 63488 00:12:26.576 }, 00:12:26.576 { 00:12:26.576 "name": "pt2", 00:12:26.576 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:26.576 "is_configured": true, 00:12:26.576 "data_offset": 2048, 00:12:26.576 "data_size": 63488 00:12:26.576 }, 00:12:26.576 { 00:12:26.576 "name": "pt3", 00:12:26.576 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:26.576 "is_configured": true, 00:12:26.576 "data_offset": 2048, 00:12:26.576 "data_size": 63488 00:12:26.576 } 00:12:26.576 ] 00:12:26.576 }' 00:12:26.576 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.576 14:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.835 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:26.835 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:26.835 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:26.835 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:26.835 14:38:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:26.835 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:26.835 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:26.835 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:26.835 14:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.835 14:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.835 [2024-11-04 14:38:25.939754] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:27.094 14:38:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.094 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:27.094 "name": "raid_bdev1", 00:12:27.094 "aliases": [ 00:12:27.094 "71233d12-3fa7-4d1c-9221-d757f0f5c9f9" 00:12:27.094 ], 00:12:27.094 "product_name": "Raid Volume", 00:12:27.094 "block_size": 512, 00:12:27.094 "num_blocks": 63488, 00:12:27.094 "uuid": "71233d12-3fa7-4d1c-9221-d757f0f5c9f9", 00:12:27.094 "assigned_rate_limits": { 00:12:27.094 "rw_ios_per_sec": 0, 00:12:27.094 "rw_mbytes_per_sec": 0, 00:12:27.094 "r_mbytes_per_sec": 0, 00:12:27.094 "w_mbytes_per_sec": 0 00:12:27.094 }, 00:12:27.094 "claimed": false, 00:12:27.094 "zoned": false, 00:12:27.094 "supported_io_types": { 00:12:27.094 "read": true, 00:12:27.094 "write": true, 00:12:27.094 "unmap": false, 00:12:27.094 "flush": false, 00:12:27.094 "reset": true, 00:12:27.094 "nvme_admin": false, 00:12:27.094 "nvme_io": false, 00:12:27.094 "nvme_io_md": false, 00:12:27.094 "write_zeroes": true, 00:12:27.094 "zcopy": false, 00:12:27.094 "get_zone_info": false, 00:12:27.094 "zone_management": false, 00:12:27.094 "zone_append": false, 00:12:27.094 "compare": false, 00:12:27.094 
"compare_and_write": false, 00:12:27.094 "abort": false, 00:12:27.094 "seek_hole": false, 00:12:27.094 "seek_data": false, 00:12:27.094 "copy": false, 00:12:27.094 "nvme_iov_md": false 00:12:27.094 }, 00:12:27.094 "memory_domains": [ 00:12:27.094 { 00:12:27.094 "dma_device_id": "system", 00:12:27.094 "dma_device_type": 1 00:12:27.094 }, 00:12:27.094 { 00:12:27.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.094 "dma_device_type": 2 00:12:27.094 }, 00:12:27.094 { 00:12:27.094 "dma_device_id": "system", 00:12:27.094 "dma_device_type": 1 00:12:27.094 }, 00:12:27.094 { 00:12:27.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.094 "dma_device_type": 2 00:12:27.094 }, 00:12:27.094 { 00:12:27.094 "dma_device_id": "system", 00:12:27.094 "dma_device_type": 1 00:12:27.094 }, 00:12:27.094 { 00:12:27.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.094 "dma_device_type": 2 00:12:27.094 } 00:12:27.094 ], 00:12:27.094 "driver_specific": { 00:12:27.094 "raid": { 00:12:27.094 "uuid": "71233d12-3fa7-4d1c-9221-d757f0f5c9f9", 00:12:27.094 "strip_size_kb": 0, 00:12:27.094 "state": "online", 00:12:27.094 "raid_level": "raid1", 00:12:27.094 "superblock": true, 00:12:27.094 "num_base_bdevs": 3, 00:12:27.094 "num_base_bdevs_discovered": 3, 00:12:27.094 "num_base_bdevs_operational": 3, 00:12:27.094 "base_bdevs_list": [ 00:12:27.094 { 00:12:27.094 "name": "pt1", 00:12:27.094 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:27.094 "is_configured": true, 00:12:27.094 "data_offset": 2048, 00:12:27.094 "data_size": 63488 00:12:27.094 }, 00:12:27.094 { 00:12:27.094 "name": "pt2", 00:12:27.094 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:27.094 "is_configured": true, 00:12:27.094 "data_offset": 2048, 00:12:27.094 "data_size": 63488 00:12:27.094 }, 00:12:27.094 { 00:12:27.094 "name": "pt3", 00:12:27.094 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:27.094 "is_configured": true, 00:12:27.094 "data_offset": 2048, 00:12:27.094 "data_size": 63488 00:12:27.094 } 
00:12:27.094 ] 00:12:27.094 } 00:12:27.094 } 00:12:27.094 }' 00:12:27.094 14:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:27.094 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:27.094 pt2 00:12:27.095 pt3' 00:12:27.095 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.095 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:27.095 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.095 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:27.095 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.095 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.095 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.095 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.095 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.095 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.095 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.095 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:27.095 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.095 14:38:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.095 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.095 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.095 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.095 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.095 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:27.095 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:27.095 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.095 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.095 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.354 [2024-11-04 14:38:26.263801] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]]
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=71233d12-3fa7-4d1c-9221-d757f0f5c9f9
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 71233d12-3fa7-4d1c-9221-d757f0f5c9f9 ']'
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:27.354 [2024-11-04 14:38:26.307453] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:27.354 [2024-11-04 14:38:26.307485] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:27.354 [2024-11-04 14:38:26.307578] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:27.354 [2024-11-04 14:38:26.307684] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:27.354 [2024-11-04 14:38:26.307700] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:27.354 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:12:27.355 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:27.355 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:27.355 [2024-11-04 14:38:26.447604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:12:27.355 [2024-11-04 14:38:26.450139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:12:27.355 [2024-11-04 14:38:26.450358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:12:27.355 [2024-11-04 14:38:26.450443] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:12:27.355 [2024-11-04 14:38:26.450523] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:12:27.355 [2024-11-04 14:38:26.450558] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:12:27.355 [2024-11-04 14:38:26.450586] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:27.355 [2024-11-04 14:38:26.450599] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:12:27.355 request:
00:12:27.355 {
00:12:27.355 "name": "raid_bdev1",
00:12:27.355 "raid_level": "raid1",
00:12:27.355 "base_bdevs": [
00:12:27.355 "malloc1",
00:12:27.355 "malloc2",
00:12:27.355 "malloc3"
00:12:27.355 ],
00:12:27.355 "superblock": false,
00:12:27.355 "method": "bdev_raid_create",
00:12:27.355 "req_id": 1
00:12:27.355 }
00:12:27.355 Got JSON-RPC error response
00:12:27.355 response:
00:12:27.355 {
00:12:27.355 "code": -17,
00:12:27.355 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:12:27.355 }
00:12:27.355 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:12:27.355 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:12:27.355 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:12:27.355 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:12:27.355 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:12:27.355 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:27.355 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:12:27.355 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:27.355 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:27.355 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:27.632 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:12:27.633 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:12:27.633 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:12:27.633 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:27.633 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:27.633 [2024-11-04 14:38:26.515564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:12:27.633 [2024-11-04 14:38:26.515782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:27.633 [2024-11-04 14:38:26.515860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:12:27.633 [2024-11-04 14:38:26.515992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:27.633 [2024-11-04 14:38:26.518878] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:27.633 [2024-11-04 14:38:26.519050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:12:27.633 [2024-11-04 14:38:26.519264] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:12:27.633 [2024-11-04 14:38:26.519434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:12:27.633 pt1
00:12:27.633 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:27.633 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:12:27.633 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:27.633 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:27.633 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:27.633 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:27.633 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:27.633 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:27.633 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:27.633 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:27.633 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:27.633 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:27.633 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:27.633 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:27.633 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:27.633 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:27.633 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:27.633 "name": "raid_bdev1",
00:12:27.633 "uuid": "71233d12-3fa7-4d1c-9221-d757f0f5c9f9",
00:12:27.633 "strip_size_kb": 0,
00:12:27.633 "state": "configuring",
00:12:27.633 "raid_level": "raid1",
00:12:27.633 "superblock": true,
00:12:27.633 "num_base_bdevs": 3,
00:12:27.633 "num_base_bdevs_discovered": 1,
00:12:27.633 "num_base_bdevs_operational": 3,
00:12:27.633 "base_bdevs_list": [
00:12:27.633 {
00:12:27.633 "name": "pt1",
00:12:27.633 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:27.633 "is_configured": true,
00:12:27.633 "data_offset": 2048,
00:12:27.633 "data_size": 63488
00:12:27.633 },
00:12:27.633 {
00:12:27.633 "name": null,
00:12:27.633 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:27.633 "is_configured": false,
00:12:27.633 "data_offset": 2048,
00:12:27.633 "data_size": 63488
00:12:27.633 },
00:12:27.633 {
00:12:27.633 "name": null,
00:12:27.633 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:27.633 "is_configured": false,
00:12:27.633 "data_offset": 2048,
00:12:27.633 "data_size": 63488
00:12:27.633 }
00:12:27.633 ]
00:12:27.633 }'
00:12:27.633 14:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:27.633 14:38:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:27.921 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:12:27.921 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:27.921 14:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:27.921 14:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:27.921 [2024-11-04 14:38:27.015910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:27.921 [2024-11-04 14:38:27.016002] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:27.921 [2024-11-04 14:38:27.016038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:12:27.921 [2024-11-04 14:38:27.016052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:27.921 [2024-11-04 14:38:27.016624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:27.921 [2024-11-04 14:38:27.016651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:27.921 [2024-11-04 14:38:27.016759] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:12:27.921 [2024-11-04 14:38:27.016796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:27.921 pt2
00:12:27.921 14:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:27.921 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:12:27.921 14:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:27.921 14:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:27.921 [2024-11-04 14:38:27.023898] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:12:27.921 14:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:27.921 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:12:27.921 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:27.921 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:27.921 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:27.921 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:27.921 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:27.921 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:27.921 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:27.921 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:27.921 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:27.921 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:27.921 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:27.921 14:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:27.921 14:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.180 14:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:28.180 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:28.180 "name": "raid_bdev1",
00:12:28.180 "uuid": "71233d12-3fa7-4d1c-9221-d757f0f5c9f9",
00:12:28.180 "strip_size_kb": 0,
00:12:28.180 "state": "configuring",
00:12:28.180 "raid_level": "raid1",
00:12:28.180 "superblock": true,
00:12:28.180 "num_base_bdevs": 3,
00:12:28.180 "num_base_bdevs_discovered": 1,
00:12:28.180 "num_base_bdevs_operational": 3,
00:12:28.180 "base_bdevs_list": [
00:12:28.180 {
00:12:28.180 "name": "pt1",
00:12:28.180 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:28.180 "is_configured": true,
00:12:28.180 "data_offset": 2048,
00:12:28.180 "data_size": 63488
00:12:28.180 },
00:12:28.180 {
00:12:28.180 "name": null,
00:12:28.180 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:28.180 "is_configured": false,
00:12:28.180 "data_offset": 0,
00:12:28.180 "data_size": 63488
00:12:28.180 },
00:12:28.180 {
00:12:28.180 "name": null,
00:12:28.180 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:28.180 "is_configured": false,
00:12:28.180 "data_offset": 2048,
00:12:28.180 "data_size": 63488
00:12:28.180 }
00:12:28.180 ]
00:12:28.180 }'
00:12:28.180 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:28.180 14:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.440 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:12:28.440 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:28.440 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:28.440 14:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:28.440 14:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.440 [2024-11-04 14:38:27.552061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:28.440 [2024-11-04 14:38:27.552143] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:28.440 [2024-11-04 14:38:27.552172] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:12:28.440 [2024-11-04 14:38:27.552189] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:28.440 [2024-11-04 14:38:27.552754] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:28.440 [2024-11-04 14:38:27.552785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:28.440 [2024-11-04 14:38:27.552880] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:12:28.440 [2024-11-04 14:38:27.552930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:28.440 pt2
00:12:28.440 14:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:28.440 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:12:28.440 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:28.440 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:12:28.440 14:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:28.440 14:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.440 [2024-11-04 14:38:27.559994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:12:28.440 [2024-11-04 14:38:27.560045] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:28.440 [2024-11-04 14:38:27.560081] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:12:28.440 [2024-11-04 14:38:27.560100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:28.440 [2024-11-04 14:38:27.560531] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:28.440 [2024-11-04 14:38:27.560581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:12:28.440 [2024-11-04 14:38:27.560665] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:12:28.440 [2024-11-04 14:38:27.560696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:12:28.440 [2024-11-04 14:38:27.560849] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:12:28.440 [2024-11-04 14:38:27.560872] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:12:28.440 [2024-11-04 14:38:27.561204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:12:28.440 [2024-11-04 14:38:27.561400] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:12:28.698 [2024-11-04 14:38:27.561423] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:12:28.698 [2024-11-04 14:38:27.561593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:28.698 pt3
00:12:28.698 14:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:28.698 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:12:28.698 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:28.698 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:12:28.698 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:28.698 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:28.698 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:28.698 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:28.698 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:28.698 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:28.698 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:28.698 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:28.698 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:28.698 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:28.698 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:28.698 14:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:28.698 14:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:28.698 14:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:28.698 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:28.698 "name": "raid_bdev1",
00:12:28.698 "uuid": "71233d12-3fa7-4d1c-9221-d757f0f5c9f9",
00:12:28.698 "strip_size_kb": 0,
00:12:28.698 "state": "online",
00:12:28.698 "raid_level": "raid1",
00:12:28.698 "superblock": true,
00:12:28.698 "num_base_bdevs": 3,
00:12:28.698 "num_base_bdevs_discovered": 3,
00:12:28.698 "num_base_bdevs_operational": 3,
00:12:28.698 "base_bdevs_list": [
00:12:28.698 {
00:12:28.698 "name": "pt1",
00:12:28.698 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:28.698 "is_configured": true,
00:12:28.698 "data_offset": 2048,
00:12:28.698 "data_size": 63488
00:12:28.698 },
00:12:28.698 {
00:12:28.698 "name": "pt2",
00:12:28.698 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:28.698 "is_configured": true,
00:12:28.698 "data_offset": 2048,
00:12:28.698 "data_size": 63488
00:12:28.698 },
00:12:28.698 {
00:12:28.698 "name": "pt3",
00:12:28.698 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:28.698 "is_configured": true,
00:12:28.698 "data_offset": 2048,
00:12:28.698 "data_size": 63488
00:12:28.698 }
00:12:28.698 ]
00:12:28.698 }'
00:12:28.698 14:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:28.698 14:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:29.268 [2024-11-04 14:38:28.104620] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:12:29.268 "name": "raid_bdev1",
00:12:29.268 "aliases": [
00:12:29.268 "71233d12-3fa7-4d1c-9221-d757f0f5c9f9"
00:12:29.268 ],
00:12:29.268 "product_name": "Raid Volume",
00:12:29.268 "block_size": 512,
00:12:29.268 "num_blocks": 63488,
00:12:29.268 "uuid": "71233d12-3fa7-4d1c-9221-d757f0f5c9f9",
00:12:29.268 "assigned_rate_limits": {
00:12:29.268 "rw_ios_per_sec": 0,
00:12:29.268 "rw_mbytes_per_sec": 0,
00:12:29.268 "r_mbytes_per_sec": 0,
00:12:29.268 "w_mbytes_per_sec": 0
00:12:29.268 },
00:12:29.268 "claimed": false,
00:12:29.268 "zoned": false,
00:12:29.268 "supported_io_types": {
00:12:29.268 "read": true,
00:12:29.268 "write": true,
00:12:29.268 "unmap": false,
00:12:29.268 "flush": false,
00:12:29.268 "reset": true,
00:12:29.268 "nvme_admin": false,
00:12:29.268 "nvme_io": false,
00:12:29.268 "nvme_io_md": false,
00:12:29.268 "write_zeroes": true,
00:12:29.268 "zcopy": false,
00:12:29.268 "get_zone_info": false,
00:12:29.268 "zone_management": false,
00:12:29.268 "zone_append": false,
00:12:29.268 "compare": false,
00:12:29.268 "compare_and_write": false,
00:12:29.268 "abort": false,
00:12:29.268 "seek_hole": false,
00:12:29.268 "seek_data": false,
00:12:29.268 "copy": false,
00:12:29.268 "nvme_iov_md": false
00:12:29.268 },
00:12:29.268 "memory_domains": [
00:12:29.268 {
00:12:29.268 "dma_device_id": "system",
00:12:29.268 "dma_device_type": 1
00:12:29.268 },
00:12:29.268 {
00:12:29.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:29.268 "dma_device_type": 2
00:12:29.268 },
00:12:29.268 {
00:12:29.268 "dma_device_id": "system",
00:12:29.268 "dma_device_type": 1
00:12:29.268 },
00:12:29.268 {
00:12:29.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:29.268 "dma_device_type": 2
00:12:29.268 },
00:12:29.268 {
00:12:29.268 "dma_device_id": "system",
00:12:29.268 "dma_device_type": 1
00:12:29.268 },
00:12:29.268 {
00:12:29.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:29.268 "dma_device_type": 2
00:12:29.268 }
00:12:29.268 ],
00:12:29.268 "driver_specific": {
00:12:29.268 "raid": {
00:12:29.268 "uuid": "71233d12-3fa7-4d1c-9221-d757f0f5c9f9",
00:12:29.268 "strip_size_kb": 0,
00:12:29.268 "state": "online",
00:12:29.268 "raid_level": "raid1",
00:12:29.268 "superblock": true,
00:12:29.268 "num_base_bdevs": 3,
00:12:29.268 "num_base_bdevs_discovered": 3,
00:12:29.268 "num_base_bdevs_operational": 3,
00:12:29.268 "base_bdevs_list": [
00:12:29.268 {
00:12:29.268 "name": "pt1",
00:12:29.268 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:29.268 "is_configured": true,
00:12:29.268 "data_offset": 2048,
00:12:29.268 "data_size": 63488
00:12:29.268 },
00:12:29.268 {
00:12:29.268 "name": "pt2",
00:12:29.268 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:29.268 "is_configured": true,
00:12:29.268 "data_offset": 2048,
00:12:29.268 "data_size": 63488
00:12:29.268 },
00:12:29.268 {
00:12:29.268 "name": "pt3",
00:12:29.268 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:29.268 "is_configured": true,
00:12:29.268 "data_offset": 2048,
00:12:29.268 "data_size": 63488
00:12:29.268 }
00:12:29.268 ]
00:12:29.268 }
00:12:29.268 }
00:12:29.268 }'
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:12:29.268 pt2
00:12:29.268 pt3'
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:29.268 14:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:29.527 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:29.527 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:29.527 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:29.527 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:12:29.527 14:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:29.527 14:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:29.527 [2024-11-04 14:38:28.400659] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:29.527 14:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:29.527 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 71233d12-3fa7-4d1c-9221-d757f0f5c9f9 '!=' 71233d12-3fa7-4d1c-9221-d757f0f5c9f9 ']'
00:12:29.527 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:12:29.527 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:12:29.527 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0
00:12:29.527 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:12:29.527 14:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:29.527 14:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:29.527 [2024-11-04 14:38:28.448439] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:12:29.527 14:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:29.527 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:12:29.527 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:29.527 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:29.527 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:29.527 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:29.527 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:12:29.527 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:29.527 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:29.527 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:29.527 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:29.527 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:29.527 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:29.527 14:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:29.527 14:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:29.528 14:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:29.528 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:29.528 "name": "raid_bdev1",
00:12:29.528 "uuid": "71233d12-3fa7-4d1c-9221-d757f0f5c9f9",
00:12:29.528 "strip_size_kb": 0,
00:12:29.528 "state": "online",
00:12:29.528 "raid_level": "raid1",
00:12:29.528 "superblock": true,
00:12:29.528 "num_base_bdevs": 3,
00:12:29.528 "num_base_bdevs_discovered": 2,
00:12:29.528 "num_base_bdevs_operational": 2,
00:12:29.528 "base_bdevs_list": [
00:12:29.528 {
00:12:29.528 "name": null,
00:12:29.528 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:29.528 "is_configured": false,
00:12:29.528 "data_offset": 0,
00:12:29.528 "data_size": 63488
00:12:29.528 },
00:12:29.528 {
00:12:29.528 "name": "pt2",
00:12:29.528 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:29.528 "is_configured": true,
00:12:29.528 "data_offset": 2048,
00:12:29.528 "data_size": 63488
00:12:29.528 },
00:12:29.528 {
00:12:29.528 "name": "pt3",
00:12:29.528 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:29.528 "is_configured": true,
00:12:29.528 "data_offset": 2048,
00:12:29.528 "data_size": 63488
00:12:29.528 }
00:12:29.528 ]
00:12:29.528 }'
00:12:29.528 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:29.528 14:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:30.095 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:12:30.095 14:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:30.095 14:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:30.095 [2024-11-04 14:38:28.984518] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:30.095 [2024-11-04 14:38:28.984552] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:30.095 [2024-11-04 14:38:28.984662] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:30.095 [2024-11-04 14:38:28.984738] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:30.095 [2024-11-04 14:38:28.984761] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:12:30.095 14:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:30.095 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:30.095 14:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:12:30.095 14:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:30.095 14:38:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:30.095 14:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:30.095 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:12:30.095 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:12:30.095 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:12:30.095 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:12:30.095 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:12:30.095 14:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:30.095 14:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:30.095 14:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:30.095 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:12:30.095 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:12:30.095 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3
00:12:30.095 14:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:30.095 14:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:30.095 14:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:30.095 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:12:30.095 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:12:30.095 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:12:30.095 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:12:30.095 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:30.095 14:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:30.095 14:38:29
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.095 [2024-11-04 14:38:29.064479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:30.095 [2024-11-04 14:38:29.064563] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.095 [2024-11-04 14:38:29.064588] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:12:30.095 [2024-11-04 14:38:29.064604] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.095 [2024-11-04 14:38:29.067617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.095 [2024-11-04 14:38:29.067666] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:30.095 [2024-11-04 14:38:29.067781] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:30.095 [2024-11-04 14:38:29.067844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:30.095 pt2 00:12:30.095 14:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.095 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:30.095 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.096 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:30.096 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.096 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.096 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:30.096 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.096 14:38:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.096 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.096 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.096 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.096 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.096 14:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.096 14:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.096 14:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.096 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.096 "name": "raid_bdev1", 00:12:30.096 "uuid": "71233d12-3fa7-4d1c-9221-d757f0f5c9f9", 00:12:30.096 "strip_size_kb": 0, 00:12:30.096 "state": "configuring", 00:12:30.096 "raid_level": "raid1", 00:12:30.096 "superblock": true, 00:12:30.096 "num_base_bdevs": 3, 00:12:30.096 "num_base_bdevs_discovered": 1, 00:12:30.096 "num_base_bdevs_operational": 2, 00:12:30.096 "base_bdevs_list": [ 00:12:30.096 { 00:12:30.096 "name": null, 00:12:30.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.096 "is_configured": false, 00:12:30.096 "data_offset": 2048, 00:12:30.096 "data_size": 63488 00:12:30.096 }, 00:12:30.096 { 00:12:30.096 "name": "pt2", 00:12:30.096 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:30.096 "is_configured": true, 00:12:30.096 "data_offset": 2048, 00:12:30.096 "data_size": 63488 00:12:30.096 }, 00:12:30.096 { 00:12:30.096 "name": null, 00:12:30.096 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:30.096 "is_configured": false, 00:12:30.096 "data_offset": 2048, 00:12:30.096 "data_size": 63488 00:12:30.096 } 
00:12:30.096 ] 00:12:30.096 }' 00:12:30.096 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.096 14:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.663 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:30.663 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:30.663 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:12:30.663 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:30.663 14:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.663 14:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.663 [2024-11-04 14:38:29.584687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:30.663 [2024-11-04 14:38:29.584780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.663 [2024-11-04 14:38:29.584810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:30.663 [2024-11-04 14:38:29.584827] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.663 [2024-11-04 14:38:29.585428] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.663 [2024-11-04 14:38:29.585476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:30.663 [2024-11-04 14:38:29.585591] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:30.663 [2024-11-04 14:38:29.585631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:30.663 [2024-11-04 14:38:29.585776] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:12:30.663 [2024-11-04 14:38:29.585803] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:30.663 [2024-11-04 14:38:29.586150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:30.663 [2024-11-04 14:38:29.586488] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:30.663 [2024-11-04 14:38:29.586512] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:30.663 [2024-11-04 14:38:29.586685] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.663 pt3 00:12:30.663 14:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.663 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:30.663 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.663 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.663 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.663 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.663 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:30.663 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.663 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.663 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.663 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.663 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.663 
14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.663 14:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.663 14:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.663 14:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.663 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.663 "name": "raid_bdev1", 00:12:30.663 "uuid": "71233d12-3fa7-4d1c-9221-d757f0f5c9f9", 00:12:30.663 "strip_size_kb": 0, 00:12:30.663 "state": "online", 00:12:30.663 "raid_level": "raid1", 00:12:30.663 "superblock": true, 00:12:30.663 "num_base_bdevs": 3, 00:12:30.663 "num_base_bdevs_discovered": 2, 00:12:30.663 "num_base_bdevs_operational": 2, 00:12:30.663 "base_bdevs_list": [ 00:12:30.663 { 00:12:30.663 "name": null, 00:12:30.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.663 "is_configured": false, 00:12:30.663 "data_offset": 2048, 00:12:30.663 "data_size": 63488 00:12:30.663 }, 00:12:30.663 { 00:12:30.663 "name": "pt2", 00:12:30.663 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:30.663 "is_configured": true, 00:12:30.663 "data_offset": 2048, 00:12:30.663 "data_size": 63488 00:12:30.663 }, 00:12:30.663 { 00:12:30.663 "name": "pt3", 00:12:30.663 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:30.663 "is_configured": true, 00:12:30.663 "data_offset": 2048, 00:12:30.663 "data_size": 63488 00:12:30.663 } 00:12:30.663 ] 00:12:30.663 }' 00:12:30.663 14:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.663 14:38:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.231 [2024-11-04 14:38:30.140829] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:31.231 [2024-11-04 14:38:30.140878] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:31.231 [2024-11-04 14:38:30.140983] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:31.231 [2024-11-04 14:38:30.141068] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:31.231 [2024-11-04 14:38:30.141083] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.231 [2024-11-04 14:38:30.212851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:31.231 [2024-11-04 14:38:30.212934] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.231 [2024-11-04 14:38:30.212994] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:31.231 [2024-11-04 14:38:30.213018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.231 [2024-11-04 14:38:30.215981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.231 [2024-11-04 14:38:30.216024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:31.231 [2024-11-04 14:38:30.216129] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:31.231 [2024-11-04 14:38:30.216190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:31.231 [2024-11-04 14:38:30.216377] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:31.231 [2024-11-04 14:38:30.216395] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:31.231 [2024-11-04 14:38:30.216417] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:12:31.231 [2024-11-04 14:38:30.216503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:31.231 pt1 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.231 "name": "raid_bdev1", 00:12:31.231 "uuid": "71233d12-3fa7-4d1c-9221-d757f0f5c9f9", 00:12:31.231 "strip_size_kb": 0, 00:12:31.231 "state": "configuring", 00:12:31.231 "raid_level": "raid1", 00:12:31.231 "superblock": true, 00:12:31.231 "num_base_bdevs": 3, 00:12:31.231 "num_base_bdevs_discovered": 1, 00:12:31.231 "num_base_bdevs_operational": 2, 00:12:31.231 "base_bdevs_list": [ 00:12:31.231 { 00:12:31.231 "name": null, 00:12:31.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.231 "is_configured": false, 00:12:31.231 "data_offset": 2048, 00:12:31.231 "data_size": 63488 00:12:31.231 }, 00:12:31.231 { 00:12:31.231 "name": "pt2", 00:12:31.231 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:31.231 "is_configured": true, 00:12:31.231 "data_offset": 2048, 00:12:31.231 "data_size": 63488 00:12:31.231 }, 00:12:31.231 { 00:12:31.231 "name": null, 00:12:31.231 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:31.231 "is_configured": false, 00:12:31.231 "data_offset": 2048, 00:12:31.231 "data_size": 63488 00:12:31.231 } 00:12:31.231 ] 00:12:31.231 }' 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.231 14:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.799 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:31.799 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:31.799 14:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.799 14:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.799 14:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:31.799 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:31.799 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:31.799 14:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.799 14:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.799 [2024-11-04 14:38:30.817140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:31.799 [2024-11-04 14:38:30.817214] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.799 [2024-11-04 14:38:30.817246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:31.799 [2024-11-04 14:38:30.817261] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.799 [2024-11-04 14:38:30.817813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.799 [2024-11-04 14:38:30.817843] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:31.799 [2024-11-04 14:38:30.818006] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:31.799 [2024-11-04 14:38:30.818072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:31.799 [2024-11-04 14:38:30.818232] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:31.799 [2024-11-04 14:38:30.818247] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:31.799 [2024-11-04 14:38:30.818560] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:31.799 [2024-11-04 14:38:30.818912] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:31.799 [2024-11-04 14:38:30.818961] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:31.799 [2024-11-04 14:38:30.819133] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:31.799 pt3 00:12:31.799 14:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.799 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:31.799 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.799 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.799 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.799 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.799 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:31.799 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.799 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.799 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.799 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.799 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.799 14:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.799 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.799 14:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.799 14:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:12:31.799 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.799 "name": "raid_bdev1", 00:12:31.799 "uuid": "71233d12-3fa7-4d1c-9221-d757f0f5c9f9", 00:12:31.799 "strip_size_kb": 0, 00:12:31.799 "state": "online", 00:12:31.799 "raid_level": "raid1", 00:12:31.799 "superblock": true, 00:12:31.799 "num_base_bdevs": 3, 00:12:31.799 "num_base_bdevs_discovered": 2, 00:12:31.799 "num_base_bdevs_operational": 2, 00:12:31.799 "base_bdevs_list": [ 00:12:31.799 { 00:12:31.799 "name": null, 00:12:31.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.799 "is_configured": false, 00:12:31.799 "data_offset": 2048, 00:12:31.799 "data_size": 63488 00:12:31.799 }, 00:12:31.799 { 00:12:31.799 "name": "pt2", 00:12:31.799 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:31.799 "is_configured": true, 00:12:31.799 "data_offset": 2048, 00:12:31.799 "data_size": 63488 00:12:31.799 }, 00:12:31.799 { 00:12:31.799 "name": "pt3", 00:12:31.799 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:31.799 "is_configured": true, 00:12:31.799 "data_offset": 2048, 00:12:31.800 "data_size": 63488 00:12:31.800 } 00:12:31.800 ] 00:12:31.800 }' 00:12:31.800 14:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.800 14:38:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.368 14:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:32.368 14:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.368 14:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.368 14:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:32.368 14:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.368 14:38:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:32.368 14:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:32.368 14:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.368 14:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.368 14:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:32.368 [2024-11-04 14:38:31.409618] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:32.368 14:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.368 14:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 71233d12-3fa7-4d1c-9221-d757f0f5c9f9 '!=' 71233d12-3fa7-4d1c-9221-d757f0f5c9f9 ']' 00:12:32.368 14:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68705 00:12:32.368 14:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 68705 ']' 00:12:32.368 14:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 68705 00:12:32.368 14:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:12:32.368 14:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:32.368 14:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68705 00:12:32.368 killing process with pid 68705 00:12:32.368 14:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:32.368 14:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:32.368 14:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68705' 00:12:32.368 14:38:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@971 -- # kill 68705 00:12:32.368 [2024-11-04 14:38:31.486607] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:32.368 14:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 68705 00:12:32.368 [2024-11-04 14:38:31.486719] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:32.368 [2024-11-04 14:38:31.486799] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:32.368 [2024-11-04 14:38:31.486818] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:32.936 [2024-11-04 14:38:31.760384] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:33.882 14:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:33.882 00:12:33.882 real 0m8.697s 00:12:33.882 user 0m14.311s 00:12:33.882 sys 0m1.185s 00:12:33.882 14:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:33.882 14:38:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.882 ************************************ 00:12:33.882 END TEST raid_superblock_test 00:12:33.882 ************************************ 00:12:33.882 14:38:32 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:12:33.882 14:38:32 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:33.882 14:38:32 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:33.882 14:38:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:33.882 ************************************ 00:12:33.882 START TEST raid_read_error_test 00:12:33.882 ************************************ 00:12:33.882 14:38:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 read 00:12:33.882 14:38:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:33.882 14:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:33.882 14:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:33.882 14:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:33.882 14:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:33.882 14:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:33.882 14:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:33.882 14:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:33.882 14:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:33.882 14:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:33.882 14:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:33.882 14:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:33.882 14:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:33.882 14:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:33.882 14:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:33.882 14:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:33.882 14:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:33.882 14:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:33.882 14:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:33.882 14:38:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:33.882 14:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:33.882 14:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:33.882 14:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:33.882 14:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:33.882 14:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.prySEBNlWS 00:12:33.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.882 14:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69162 00:12:33.882 14:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:33.882 14:38:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69162 00:12:33.882 14:38:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 69162 ']' 00:12:33.882 14:38:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.882 14:38:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:33.882 14:38:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.882 14:38:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:33.882 14:38:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.882 [2024-11-04 14:38:33.000677] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:12:33.882 [2024-11-04 14:38:33.001109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69162 ] 00:12:34.141 [2024-11-04 14:38:33.189410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.400 [2024-11-04 14:38:33.350471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.658 [2024-11-04 14:38:33.572842] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:34.658 [2024-11-04 14:38:33.572883] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:34.917 14:38:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:34.917 14:38:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:12:34.917 14:38:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:34.917 14:38:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:34.917 14:38:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.917 14:38:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.917 BaseBdev1_malloc 00:12:34.917 14:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.917 14:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:34.917 14:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.917 14:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.917 true 00:12:34.917 14:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:34.917 14:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:34.917 14:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.917 14:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.917 [2024-11-04 14:38:34.033251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:34.917 [2024-11-04 14:38:34.033335] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.917 [2024-11-04 14:38:34.033365] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:34.917 [2024-11-04 14:38:34.033381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.917 [2024-11-04 14:38:34.036222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.917 [2024-11-04 14:38:34.036276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:35.176 BaseBdev1 00:12:35.176 14:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.176 14:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.177 BaseBdev2_malloc 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.177 true 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.177 [2024-11-04 14:38:34.098603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:35.177 [2024-11-04 14:38:34.098811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.177 [2024-11-04 14:38:34.098885] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:35.177 [2024-11-04 14:38:34.099061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.177 [2024-11-04 14:38:34.102199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.177 [2024-11-04 14:38:34.102251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:35.177 BaseBdev2 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.177 BaseBdev3_malloc 00:12:35.177 14:38:34 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.177 true 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.177 [2024-11-04 14:38:34.174722] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:35.177 [2024-11-04 14:38:34.174951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.177 [2024-11-04 14:38:34.175026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:35.177 [2024-11-04 14:38:34.175145] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.177 [2024-11-04 14:38:34.178095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.177 [2024-11-04 14:38:34.178272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:35.177 BaseBdev3 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.177 [2024-11-04 14:38:34.187000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:35.177 [2024-11-04 14:38:34.189634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:35.177 [2024-11-04 14:38:34.189743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:35.177 [2024-11-04 14:38:34.190101] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:35.177 [2024-11-04 14:38:34.190122] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:35.177 [2024-11-04 14:38:34.190479] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:35.177 [2024-11-04 14:38:34.190724] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:35.177 [2024-11-04 14:38:34.190745] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:35.177 [2024-11-04 14:38:34.191004] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.177 14:38:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.177 "name": "raid_bdev1", 00:12:35.177 "uuid": "734efab9-d35c-4413-8301-9d19b0f1c8f8", 00:12:35.177 "strip_size_kb": 0, 00:12:35.177 "state": "online", 00:12:35.177 "raid_level": "raid1", 00:12:35.177 "superblock": true, 00:12:35.177 "num_base_bdevs": 3, 00:12:35.177 "num_base_bdevs_discovered": 3, 00:12:35.177 "num_base_bdevs_operational": 3, 00:12:35.177 "base_bdevs_list": [ 00:12:35.177 { 00:12:35.177 "name": "BaseBdev1", 00:12:35.177 "uuid": "96a716a5-3af9-516b-8ebf-c194e6c9c79d", 00:12:35.177 "is_configured": true, 00:12:35.177 "data_offset": 2048, 00:12:35.177 "data_size": 63488 00:12:35.177 }, 00:12:35.177 { 00:12:35.177 "name": "BaseBdev2", 00:12:35.177 "uuid": "0e62e789-3c25-5f61-b387-ec8f147d9d29", 00:12:35.177 "is_configured": true, 00:12:35.177 "data_offset": 2048, 00:12:35.177 "data_size": 63488 
00:12:35.177 }, 00:12:35.177 { 00:12:35.177 "name": "BaseBdev3", 00:12:35.177 "uuid": "4fe39b96-d8d3-5329-a246-0f2939aac548", 00:12:35.177 "is_configured": true, 00:12:35.177 "data_offset": 2048, 00:12:35.177 "data_size": 63488 00:12:35.177 } 00:12:35.177 ] 00:12:35.177 }' 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.177 14:38:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.745 14:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:35.745 14:38:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:35.745 [2024-11-04 14:38:34.860552] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:36.680 14:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:36.680 14:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.680 14:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.680 14:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.680 14:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:36.680 14:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:36.680 14:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:36.680 14:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:36.680 14:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:36.680 14:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.680 
14:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.680 14:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.680 14:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.680 14:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:36.680 14:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.680 14:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.680 14:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.680 14:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.680 14:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.680 14:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.680 14:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.680 14:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.680 14:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.939 14:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.939 "name": "raid_bdev1", 00:12:36.939 "uuid": "734efab9-d35c-4413-8301-9d19b0f1c8f8", 00:12:36.939 "strip_size_kb": 0, 00:12:36.939 "state": "online", 00:12:36.939 "raid_level": "raid1", 00:12:36.939 "superblock": true, 00:12:36.939 "num_base_bdevs": 3, 00:12:36.939 "num_base_bdevs_discovered": 3, 00:12:36.939 "num_base_bdevs_operational": 3, 00:12:36.939 "base_bdevs_list": [ 00:12:36.939 { 00:12:36.939 "name": "BaseBdev1", 00:12:36.939 "uuid": "96a716a5-3af9-516b-8ebf-c194e6c9c79d", 
00:12:36.939 "is_configured": true, 00:12:36.939 "data_offset": 2048, 00:12:36.939 "data_size": 63488 00:12:36.939 }, 00:12:36.939 { 00:12:36.939 "name": "BaseBdev2", 00:12:36.939 "uuid": "0e62e789-3c25-5f61-b387-ec8f147d9d29", 00:12:36.939 "is_configured": true, 00:12:36.939 "data_offset": 2048, 00:12:36.939 "data_size": 63488 00:12:36.939 }, 00:12:36.939 { 00:12:36.939 "name": "BaseBdev3", 00:12:36.939 "uuid": "4fe39b96-d8d3-5329-a246-0f2939aac548", 00:12:36.939 "is_configured": true, 00:12:36.939 "data_offset": 2048, 00:12:36.939 "data_size": 63488 00:12:36.939 } 00:12:36.939 ] 00:12:36.939 }' 00:12:36.939 14:38:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.939 14:38:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.197 14:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:37.197 14:38:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.197 14:38:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.197 [2024-11-04 14:38:36.296132] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:37.197 [2024-11-04 14:38:36.296166] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:37.197 [2024-11-04 14:38:36.299659] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:37.197 [2024-11-04 14:38:36.299721] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.197 [2024-11-04 14:38:36.299870] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:37.197 [2024-11-04 14:38:36.299887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:37.197 { 00:12:37.197 "results": [ 00:12:37.197 { 00:12:37.197 "job": "raid_bdev1", 
00:12:37.197 "core_mask": "0x1", 00:12:37.197 "workload": "randrw", 00:12:37.197 "percentage": 50, 00:12:37.197 "status": "finished", 00:12:37.197 "queue_depth": 1, 00:12:37.197 "io_size": 131072, 00:12:37.197 "runtime": 1.432959, 00:12:37.197 "iops": 9225.66521442693, 00:12:37.197 "mibps": 1153.2081518033663, 00:12:37.197 "io_failed": 0, 00:12:37.197 "io_timeout": 0, 00:12:37.197 "avg_latency_us": 104.11954119103287, 00:12:37.197 "min_latency_us": 41.192727272727275, 00:12:37.197 "max_latency_us": 1951.1854545454546 00:12:37.197 } 00:12:37.197 ], 00:12:37.197 "core_count": 1 00:12:37.197 } 00:12:37.197 14:38:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.197 14:38:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69162 00:12:37.197 14:38:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 69162 ']' 00:12:37.197 14:38:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 69162 00:12:37.197 14:38:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:12:37.197 14:38:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:37.197 14:38:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69162 00:12:37.456 killing process with pid 69162 00:12:37.456 14:38:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:37.456 14:38:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:37.456 14:38:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69162' 00:12:37.456 14:38:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 69162 00:12:37.456 [2024-11-04 14:38:36.334923] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:37.456 14:38:36 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 69162 00:12:37.456 [2024-11-04 14:38:36.543159] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:38.832 14:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.prySEBNlWS 00:12:38.832 14:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:38.832 14:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:38.832 14:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:38.832 14:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:38.832 14:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:38.832 14:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:38.832 ************************************ 00:12:38.832 END TEST raid_read_error_test 00:12:38.832 ************************************ 00:12:38.832 14:38:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:38.832 00:12:38.832 real 0m4.787s 00:12:38.832 user 0m5.966s 00:12:38.832 sys 0m0.607s 00:12:38.832 14:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:38.832 14:38:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.832 14:38:37 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:12:38.832 14:38:37 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:38.832 14:38:37 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:38.832 14:38:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:38.832 ************************************ 00:12:38.832 START TEST raid_write_error_test 00:12:38.832 ************************************ 00:12:38.832 14:38:37 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 write 00:12:38.832 14:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:38.832 14:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:38.832 14:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:38.832 14:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:38.832 14:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:38.832 14:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:38.832 14:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:38.832 14:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:38.832 14:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:38.832 14:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:38.832 14:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:38.832 14:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:38.832 14:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:38.832 14:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:38.832 14:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:38.832 14:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:38.832 14:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:38.832 14:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:12:38.832 14:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:38.832 14:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:38.832 14:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:38.832 14:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:38.832 14:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:38.832 14:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:38.832 14:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.jFzUQuX5iz 00:12:38.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.832 14:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69302 00:12:38.832 14:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:38.832 14:38:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69302 00:12:38.832 14:38:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 69302 ']' 00:12:38.832 14:38:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.832 14:38:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:38.832 14:38:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:38.832 14:38:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:38.832 14:38:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.832 [2024-11-04 14:38:37.800449] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:12:38.832 [2024-11-04 14:38:37.800806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69302 ] 00:12:39.091 [2024-11-04 14:38:37.968253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.091 [2024-11-04 14:38:38.098991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.349 [2024-11-04 14:38:38.308404] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:39.349 [2024-11-04 14:38:38.308458] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.918 BaseBdev1_malloc 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.918 true 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.918 [2024-11-04 14:38:38.843532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:39.918 [2024-11-04 14:38:38.843619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:39.918 [2024-11-04 14:38:38.843648] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:39.918 [2024-11-04 14:38:38.843667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:39.918 [2024-11-04 14:38:38.846566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:39.918 [2024-11-04 14:38:38.846773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:39.918 BaseBdev1 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:39.918 BaseBdev2_malloc 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.918 true 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.918 [2024-11-04 14:38:38.908674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:39.918 [2024-11-04 14:38:38.908745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:39.918 [2024-11-04 14:38:38.908772] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:39.918 [2024-11-04 14:38:38.908789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:39.918 [2024-11-04 14:38:38.911628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:39.918 [2024-11-04 14:38:38.911679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:39.918 BaseBdev2 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:39.918 14:38:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.918 BaseBdev3_malloc 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.918 true 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.918 14:38:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.918 [2024-11-04 14:38:38.979661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:39.918 [2024-11-04 14:38:38.979739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:39.918 [2024-11-04 14:38:38.979767] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:39.918 [2024-11-04 14:38:38.979785] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:39.919 [2024-11-04 14:38:38.982872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:39.919 [2024-11-04 14:38:38.983129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:12:39.919 BaseBdev3 00:12:39.919 14:38:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.919 14:38:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:39.919 14:38:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.919 14:38:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.919 [2024-11-04 14:38:38.987787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:39.919 [2024-11-04 14:38:38.990349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:39.919 [2024-11-04 14:38:38.990616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:39.919 [2024-11-04 14:38:38.990922] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:39.919 [2024-11-04 14:38:38.990975] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:39.919 [2024-11-04 14:38:38.991300] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:39.919 [2024-11-04 14:38:38.991540] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:39.919 [2024-11-04 14:38:38.991561] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:39.919 [2024-11-04 14:38:38.991809] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:39.919 14:38:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.919 14:38:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:39.919 14:38:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:12:39.919 14:38:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.919 14:38:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.919 14:38:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.919 14:38:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:39.919 14:38:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.919 14:38:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.919 14:38:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.919 14:38:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.919 14:38:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.919 14:38:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.919 14:38:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.919 14:38:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.919 14:38:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.179 14:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.179 "name": "raid_bdev1", 00:12:40.179 "uuid": "8cafb0cb-dac0-4cd5-8293-70cc9ab52bbc", 00:12:40.179 "strip_size_kb": 0, 00:12:40.179 "state": "online", 00:12:40.179 "raid_level": "raid1", 00:12:40.179 "superblock": true, 00:12:40.179 "num_base_bdevs": 3, 00:12:40.179 "num_base_bdevs_discovered": 3, 00:12:40.179 "num_base_bdevs_operational": 3, 00:12:40.179 "base_bdevs_list": [ 00:12:40.179 { 00:12:40.179 "name": "BaseBdev1", 00:12:40.179 
"uuid": "e4d0d7fa-b436-559e-b33f-14f9846ba267", 00:12:40.179 "is_configured": true, 00:12:40.179 "data_offset": 2048, 00:12:40.179 "data_size": 63488 00:12:40.179 }, 00:12:40.179 { 00:12:40.179 "name": "BaseBdev2", 00:12:40.179 "uuid": "51f87d90-bf02-55cf-b582-49701b8fafa7", 00:12:40.179 "is_configured": true, 00:12:40.179 "data_offset": 2048, 00:12:40.179 "data_size": 63488 00:12:40.179 }, 00:12:40.179 { 00:12:40.179 "name": "BaseBdev3", 00:12:40.179 "uuid": "735af393-d062-549d-a9ab-0e2f9aa4e546", 00:12:40.179 "is_configured": true, 00:12:40.179 "data_offset": 2048, 00:12:40.179 "data_size": 63488 00:12:40.179 } 00:12:40.179 ] 00:12:40.179 }' 00:12:40.179 14:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.179 14:38:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.438 14:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:40.438 14:38:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:40.695 [2024-11-04 14:38:39.605410] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:41.630 14:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:41.630 14:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.630 14:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.630 [2024-11-04 14:38:40.510396] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:41.630 [2024-11-04 14:38:40.510649] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:41.630 [2024-11-04 14:38:40.510947] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:12:41.630 14:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.630 14:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:41.630 14:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:41.630 14:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:41.630 14:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:12:41.630 14:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:41.630 14:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.630 14:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.630 14:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.630 14:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.630 14:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:41.630 14:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.630 14:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.630 14:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.630 14:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.630 14:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.630 14:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.630 14:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:41.630 14:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.630 14:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.630 14:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.630 "name": "raid_bdev1", 00:12:41.630 "uuid": "8cafb0cb-dac0-4cd5-8293-70cc9ab52bbc", 00:12:41.630 "strip_size_kb": 0, 00:12:41.630 "state": "online", 00:12:41.630 "raid_level": "raid1", 00:12:41.630 "superblock": true, 00:12:41.630 "num_base_bdevs": 3, 00:12:41.630 "num_base_bdevs_discovered": 2, 00:12:41.630 "num_base_bdevs_operational": 2, 00:12:41.630 "base_bdevs_list": [ 00:12:41.630 { 00:12:41.630 "name": null, 00:12:41.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.630 "is_configured": false, 00:12:41.630 "data_offset": 0, 00:12:41.630 "data_size": 63488 00:12:41.630 }, 00:12:41.630 { 00:12:41.630 "name": "BaseBdev2", 00:12:41.630 "uuid": "51f87d90-bf02-55cf-b582-49701b8fafa7", 00:12:41.630 "is_configured": true, 00:12:41.630 "data_offset": 2048, 00:12:41.630 "data_size": 63488 00:12:41.630 }, 00:12:41.630 { 00:12:41.630 "name": "BaseBdev3", 00:12:41.630 "uuid": "735af393-d062-549d-a9ab-0e2f9aa4e546", 00:12:41.630 "is_configured": true, 00:12:41.630 "data_offset": 2048, 00:12:41.630 "data_size": 63488 00:12:41.630 } 00:12:41.630 ] 00:12:41.630 }' 00:12:41.630 14:38:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.630 14:38:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.198 14:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:42.198 14:38:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.198 14:38:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.198 [2024-11-04 14:38:41.043835] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:42.198 [2024-11-04 14:38:41.044101] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:42.198 [2024-11-04 14:38:41.047568] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:42.198 [2024-11-04 14:38:41.047782] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.198 [2024-11-04 14:38:41.048147] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:42.198 [2024-11-04 14:38:41.048335] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:42.198 { 00:12:42.198 "results": [ 00:12:42.198 { 00:12:42.198 "job": "raid_bdev1", 00:12:42.198 "core_mask": "0x1", 00:12:42.198 "workload": "randrw", 00:12:42.198 "percentage": 50, 00:12:42.198 "status": "finished", 00:12:42.198 "queue_depth": 1, 00:12:42.198 "io_size": 131072, 00:12:42.198 "runtime": 1.435872, 00:12:42.198 "iops": 10200.769985068308, 00:12:42.198 "mibps": 1275.0962481335384, 00:12:42.198 "io_failed": 0, 00:12:42.198 "io_timeout": 0, 00:12:42.198 "avg_latency_us": 93.61454942681407, 00:12:42.198 "min_latency_us": 41.42545454545454, 00:12:42.198 "max_latency_us": 1876.7127272727273 00:12:42.198 } 00:12:42.198 ], 00:12:42.198 "core_count": 1 00:12:42.198 } 00:12:42.198 14:38:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.198 14:38:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69302 00:12:42.198 14:38:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 69302 ']' 00:12:42.198 14:38:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 69302 00:12:42.198 14:38:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:12:42.198 14:38:41 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:42.198 14:38:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69302 00:12:42.198 14:38:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:42.198 14:38:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:42.198 14:38:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69302' 00:12:42.198 killing process with pid 69302 00:12:42.198 14:38:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 69302 00:12:42.198 [2024-11-04 14:38:41.091266] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:42.198 14:38:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 69302 00:12:42.198 [2024-11-04 14:38:41.294677] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:43.574 14:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.jFzUQuX5iz 00:12:43.574 14:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:43.574 14:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:43.574 14:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:43.574 14:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:43.574 14:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:43.574 14:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:43.574 14:38:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:43.574 00:12:43.574 real 0m4.688s 00:12:43.574 user 0m5.815s 00:12:43.574 sys 0m0.560s 00:12:43.574 14:38:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:43.574 ************************************ 00:12:43.574 END TEST raid_write_error_test 00:12:43.574 14:38:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.574 ************************************ 00:12:43.574 14:38:42 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:12:43.574 14:38:42 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:43.574 14:38:42 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:12:43.574 14:38:42 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:43.574 14:38:42 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:43.574 14:38:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:43.574 ************************************ 00:12:43.574 START TEST raid_state_function_test 00:12:43.574 ************************************ 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 false 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:43.574 
14:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69451 00:12:43.574 Process raid pid: 69451 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69451' 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69451 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 69451 ']' 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:43.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:43.574 14:38:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.574 [2024-11-04 14:38:42.555657] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:12:43.574 [2024-11-04 14:38:42.555833] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:43.833 [2024-11-04 14:38:42.746242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.833 [2024-11-04 14:38:42.901288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.092 [2024-11-04 14:38:43.114149] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:44.092 [2024-11-04 14:38:43.114200] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:44.660 14:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:44.660 14:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:12:44.660 14:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:44.660 14:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.660 14:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.660 [2024-11-04 14:38:43.503392] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:44.660 [2024-11-04 14:38:43.503474] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:44.660 [2024-11-04 14:38:43.503491] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:44.660 [2024-11-04 14:38:43.503507] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:44.660 [2024-11-04 14:38:43.503517] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:44.660 [2024-11-04 14:38:43.503531] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:44.660 [2024-11-04 14:38:43.503541] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:44.660 [2024-11-04 14:38:43.503554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:44.660 14:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.660 14:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:44.660 14:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.660 14:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.660 14:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:44.660 14:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.660 14:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.660 14:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.660 14:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.660 14:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.660 14:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.660 14:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.660 14:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.660 14:38:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.660 14:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.660 14:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.660 14:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.660 "name": "Existed_Raid", 00:12:44.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.660 "strip_size_kb": 64, 00:12:44.660 "state": "configuring", 00:12:44.660 "raid_level": "raid0", 00:12:44.660 "superblock": false, 00:12:44.660 "num_base_bdevs": 4, 00:12:44.660 "num_base_bdevs_discovered": 0, 00:12:44.660 "num_base_bdevs_operational": 4, 00:12:44.660 "base_bdevs_list": [ 00:12:44.660 { 00:12:44.660 "name": "BaseBdev1", 00:12:44.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.660 "is_configured": false, 00:12:44.660 "data_offset": 0, 00:12:44.660 "data_size": 0 00:12:44.660 }, 00:12:44.660 { 00:12:44.660 "name": "BaseBdev2", 00:12:44.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.660 "is_configured": false, 00:12:44.660 "data_offset": 0, 00:12:44.660 "data_size": 0 00:12:44.660 }, 00:12:44.660 { 00:12:44.660 "name": "BaseBdev3", 00:12:44.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.660 "is_configured": false, 00:12:44.660 "data_offset": 0, 00:12:44.660 "data_size": 0 00:12:44.660 }, 00:12:44.660 { 00:12:44.660 "name": "BaseBdev4", 00:12:44.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.660 "is_configured": false, 00:12:44.660 "data_offset": 0, 00:12:44.660 "data_size": 0 00:12:44.660 } 00:12:44.660 ] 00:12:44.660 }' 00:12:44.660 14:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.660 14:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.919 14:38:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:44.919 14:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.919 14:38:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.919 [2024-11-04 14:38:43.999503] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:44.919 [2024-11-04 14:38:43.999572] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:44.919 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.919 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:44.919 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.919 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.919 [2024-11-04 14:38:44.007463] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:44.919 [2024-11-04 14:38:44.007520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:44.919 [2024-11-04 14:38:44.007536] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:44.919 [2024-11-04 14:38:44.007552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:44.919 [2024-11-04 14:38:44.007562] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:44.919 [2024-11-04 14:38:44.007576] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:44.919 [2024-11-04 14:38:44.007585] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:44.919 [2024-11-04 14:38:44.007598] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:44.919 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.919 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:44.919 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.919 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.178 [2024-11-04 14:38:44.054284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:45.178 BaseBdev1 00:12:45.178 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.178 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:45.178 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:45.178 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:45.178 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:45.178 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:45.178 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:45.178 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:45.178 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.178 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.178 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.178 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:45.178 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.178 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.178 [ 00:12:45.178 { 00:12:45.178 "name": "BaseBdev1", 00:12:45.178 "aliases": [ 00:12:45.178 "b0095df4-5f47-4d38-95c3-b5dfa1baa3d5" 00:12:45.178 ], 00:12:45.178 "product_name": "Malloc disk", 00:12:45.178 "block_size": 512, 00:12:45.178 "num_blocks": 65536, 00:12:45.178 "uuid": "b0095df4-5f47-4d38-95c3-b5dfa1baa3d5", 00:12:45.178 "assigned_rate_limits": { 00:12:45.178 "rw_ios_per_sec": 0, 00:12:45.178 "rw_mbytes_per_sec": 0, 00:12:45.178 "r_mbytes_per_sec": 0, 00:12:45.178 "w_mbytes_per_sec": 0 00:12:45.178 }, 00:12:45.178 "claimed": true, 00:12:45.178 "claim_type": "exclusive_write", 00:12:45.178 "zoned": false, 00:12:45.178 "supported_io_types": { 00:12:45.178 "read": true, 00:12:45.178 "write": true, 00:12:45.178 "unmap": true, 00:12:45.178 "flush": true, 00:12:45.178 "reset": true, 00:12:45.178 "nvme_admin": false, 00:12:45.178 "nvme_io": false, 00:12:45.178 "nvme_io_md": false, 00:12:45.178 "write_zeroes": true, 00:12:45.178 "zcopy": true, 00:12:45.178 "get_zone_info": false, 00:12:45.178 "zone_management": false, 00:12:45.178 "zone_append": false, 00:12:45.178 "compare": false, 00:12:45.179 "compare_and_write": false, 00:12:45.179 "abort": true, 00:12:45.179 "seek_hole": false, 00:12:45.179 "seek_data": false, 00:12:45.179 "copy": true, 00:12:45.179 "nvme_iov_md": false 00:12:45.179 }, 00:12:45.179 "memory_domains": [ 00:12:45.179 { 00:12:45.179 "dma_device_id": "system", 00:12:45.179 "dma_device_type": 1 00:12:45.179 }, 00:12:45.179 { 00:12:45.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.179 "dma_device_type": 2 00:12:45.179 } 00:12:45.179 ], 00:12:45.179 "driver_specific": {} 00:12:45.179 } 00:12:45.179 ] 00:12:45.179 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:45.179 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:45.179 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:45.179 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:45.179 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:45.179 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:45.179 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:45.179 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:45.179 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.179 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.179 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.179 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.179 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.179 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.179 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.179 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.179 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.179 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.179 "name": "Existed_Raid", 
00:12:45.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.179 "strip_size_kb": 64, 00:12:45.179 "state": "configuring", 00:12:45.179 "raid_level": "raid0", 00:12:45.179 "superblock": false, 00:12:45.179 "num_base_bdevs": 4, 00:12:45.179 "num_base_bdevs_discovered": 1, 00:12:45.179 "num_base_bdevs_operational": 4, 00:12:45.179 "base_bdevs_list": [ 00:12:45.179 { 00:12:45.179 "name": "BaseBdev1", 00:12:45.179 "uuid": "b0095df4-5f47-4d38-95c3-b5dfa1baa3d5", 00:12:45.179 "is_configured": true, 00:12:45.179 "data_offset": 0, 00:12:45.179 "data_size": 65536 00:12:45.179 }, 00:12:45.179 { 00:12:45.179 "name": "BaseBdev2", 00:12:45.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.179 "is_configured": false, 00:12:45.179 "data_offset": 0, 00:12:45.179 "data_size": 0 00:12:45.179 }, 00:12:45.179 { 00:12:45.179 "name": "BaseBdev3", 00:12:45.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.179 "is_configured": false, 00:12:45.179 "data_offset": 0, 00:12:45.179 "data_size": 0 00:12:45.179 }, 00:12:45.179 { 00:12:45.179 "name": "BaseBdev4", 00:12:45.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.179 "is_configured": false, 00:12:45.179 "data_offset": 0, 00:12:45.179 "data_size": 0 00:12:45.179 } 00:12:45.179 ] 00:12:45.179 }' 00:12:45.179 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.179 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.748 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:45.748 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.748 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.748 [2024-11-04 14:38:44.582474] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:45.748 [2024-11-04 14:38:44.582542] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:45.748 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.748 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:45.748 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.748 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.748 [2024-11-04 14:38:44.590530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:45.748 [2024-11-04 14:38:44.592992] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:45.748 [2024-11-04 14:38:44.593037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:45.748 [2024-11-04 14:38:44.593053] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:45.748 [2024-11-04 14:38:44.593070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:45.748 [2024-11-04 14:38:44.593080] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:45.748 [2024-11-04 14:38:44.593094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:45.748 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.748 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:45.748 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:45.748 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:12:45.748 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:45.748 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:45.748 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:45.748 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:45.748 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:45.748 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.748 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.748 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.748 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.748 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.748 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.748 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.748 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.748 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.748 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.748 "name": "Existed_Raid", 00:12:45.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.748 "strip_size_kb": 64, 00:12:45.748 "state": "configuring", 00:12:45.748 "raid_level": "raid0", 00:12:45.748 "superblock": false, 00:12:45.748 "num_base_bdevs": 4, 00:12:45.748 
"num_base_bdevs_discovered": 1, 00:12:45.748 "num_base_bdevs_operational": 4, 00:12:45.748 "base_bdevs_list": [ 00:12:45.748 { 00:12:45.748 "name": "BaseBdev1", 00:12:45.748 "uuid": "b0095df4-5f47-4d38-95c3-b5dfa1baa3d5", 00:12:45.748 "is_configured": true, 00:12:45.748 "data_offset": 0, 00:12:45.748 "data_size": 65536 00:12:45.748 }, 00:12:45.748 { 00:12:45.748 "name": "BaseBdev2", 00:12:45.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.748 "is_configured": false, 00:12:45.748 "data_offset": 0, 00:12:45.748 "data_size": 0 00:12:45.748 }, 00:12:45.748 { 00:12:45.748 "name": "BaseBdev3", 00:12:45.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.748 "is_configured": false, 00:12:45.748 "data_offset": 0, 00:12:45.748 "data_size": 0 00:12:45.748 }, 00:12:45.748 { 00:12:45.748 "name": "BaseBdev4", 00:12:45.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.748 "is_configured": false, 00:12:45.748 "data_offset": 0, 00:12:45.748 "data_size": 0 00:12:45.748 } 00:12:45.748 ] 00:12:45.748 }' 00:12:45.748 14:38:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.748 14:38:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.008 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:46.008 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.008 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.267 [2024-11-04 14:38:45.146511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:46.267 BaseBdev2 00:12:46.267 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.267 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:46.267 14:38:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:46.267 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:46.267 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:46.267 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:46.267 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:46.267 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:46.267 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.267 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.267 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.267 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:46.267 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.267 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.267 [ 00:12:46.267 { 00:12:46.267 "name": "BaseBdev2", 00:12:46.267 "aliases": [ 00:12:46.267 "44346adb-4e0f-403b-9e35-f87af37478a8" 00:12:46.267 ], 00:12:46.267 "product_name": "Malloc disk", 00:12:46.267 "block_size": 512, 00:12:46.267 "num_blocks": 65536, 00:12:46.267 "uuid": "44346adb-4e0f-403b-9e35-f87af37478a8", 00:12:46.267 "assigned_rate_limits": { 00:12:46.267 "rw_ios_per_sec": 0, 00:12:46.267 "rw_mbytes_per_sec": 0, 00:12:46.267 "r_mbytes_per_sec": 0, 00:12:46.267 "w_mbytes_per_sec": 0 00:12:46.267 }, 00:12:46.267 "claimed": true, 00:12:46.267 "claim_type": "exclusive_write", 00:12:46.267 "zoned": false, 00:12:46.267 "supported_io_types": { 
00:12:46.267 "read": true, 00:12:46.267 "write": true, 00:12:46.267 "unmap": true, 00:12:46.267 "flush": true, 00:12:46.267 "reset": true, 00:12:46.267 "nvme_admin": false, 00:12:46.267 "nvme_io": false, 00:12:46.267 "nvme_io_md": false, 00:12:46.267 "write_zeroes": true, 00:12:46.267 "zcopy": true, 00:12:46.267 "get_zone_info": false, 00:12:46.267 "zone_management": false, 00:12:46.267 "zone_append": false, 00:12:46.267 "compare": false, 00:12:46.267 "compare_and_write": false, 00:12:46.267 "abort": true, 00:12:46.267 "seek_hole": false, 00:12:46.267 "seek_data": false, 00:12:46.267 "copy": true, 00:12:46.267 "nvme_iov_md": false 00:12:46.267 }, 00:12:46.267 "memory_domains": [ 00:12:46.267 { 00:12:46.267 "dma_device_id": "system", 00:12:46.267 "dma_device_type": 1 00:12:46.267 }, 00:12:46.267 { 00:12:46.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.267 "dma_device_type": 2 00:12:46.267 } 00:12:46.267 ], 00:12:46.267 "driver_specific": {} 00:12:46.267 } 00:12:46.267 ] 00:12:46.267 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.267 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:46.267 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:46.267 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:46.267 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:46.267 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:46.267 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:46.267 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:46.267 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:12:46.267 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:46.267 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.267 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.267 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.267 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.267 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.267 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.267 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.267 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.267 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.267 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.267 "name": "Existed_Raid", 00:12:46.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.267 "strip_size_kb": 64, 00:12:46.267 "state": "configuring", 00:12:46.267 "raid_level": "raid0", 00:12:46.267 "superblock": false, 00:12:46.267 "num_base_bdevs": 4, 00:12:46.268 "num_base_bdevs_discovered": 2, 00:12:46.268 "num_base_bdevs_operational": 4, 00:12:46.268 "base_bdevs_list": [ 00:12:46.268 { 00:12:46.268 "name": "BaseBdev1", 00:12:46.268 "uuid": "b0095df4-5f47-4d38-95c3-b5dfa1baa3d5", 00:12:46.268 "is_configured": true, 00:12:46.268 "data_offset": 0, 00:12:46.268 "data_size": 65536 00:12:46.268 }, 00:12:46.268 { 00:12:46.268 "name": "BaseBdev2", 00:12:46.268 "uuid": "44346adb-4e0f-403b-9e35-f87af37478a8", 00:12:46.268 
"is_configured": true, 00:12:46.268 "data_offset": 0, 00:12:46.268 "data_size": 65536 00:12:46.268 }, 00:12:46.268 { 00:12:46.268 "name": "BaseBdev3", 00:12:46.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.268 "is_configured": false, 00:12:46.268 "data_offset": 0, 00:12:46.268 "data_size": 0 00:12:46.268 }, 00:12:46.268 { 00:12:46.268 "name": "BaseBdev4", 00:12:46.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.268 "is_configured": false, 00:12:46.268 "data_offset": 0, 00:12:46.268 "data_size": 0 00:12:46.268 } 00:12:46.268 ] 00:12:46.268 }' 00:12:46.268 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.268 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.836 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:46.836 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.836 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.836 [2024-11-04 14:38:45.740736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:46.836 BaseBdev3 00:12:46.836 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.836 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:46.836 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:46.836 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:46.836 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:46.836 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:46.836 14:38:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:46.836 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:46.836 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.836 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.836 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.836 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:46.836 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.836 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.836 [ 00:12:46.836 { 00:12:46.836 "name": "BaseBdev3", 00:12:46.836 "aliases": [ 00:12:46.836 "56f81885-78e6-4863-ad41-4239c971a454" 00:12:46.836 ], 00:12:46.836 "product_name": "Malloc disk", 00:12:46.836 "block_size": 512, 00:12:46.836 "num_blocks": 65536, 00:12:46.836 "uuid": "56f81885-78e6-4863-ad41-4239c971a454", 00:12:46.836 "assigned_rate_limits": { 00:12:46.836 "rw_ios_per_sec": 0, 00:12:46.836 "rw_mbytes_per_sec": 0, 00:12:46.836 "r_mbytes_per_sec": 0, 00:12:46.836 "w_mbytes_per_sec": 0 00:12:46.836 }, 00:12:46.836 "claimed": true, 00:12:46.836 "claim_type": "exclusive_write", 00:12:46.836 "zoned": false, 00:12:46.836 "supported_io_types": { 00:12:46.836 "read": true, 00:12:46.836 "write": true, 00:12:46.836 "unmap": true, 00:12:46.836 "flush": true, 00:12:46.836 "reset": true, 00:12:46.836 "nvme_admin": false, 00:12:46.836 "nvme_io": false, 00:12:46.836 "nvme_io_md": false, 00:12:46.836 "write_zeroes": true, 00:12:46.836 "zcopy": true, 00:12:46.836 "get_zone_info": false, 00:12:46.836 "zone_management": false, 00:12:46.836 "zone_append": false, 00:12:46.836 "compare": false, 00:12:46.836 "compare_and_write": false, 
00:12:46.836 "abort": true, 00:12:46.836 "seek_hole": false, 00:12:46.836 "seek_data": false, 00:12:46.836 "copy": true, 00:12:46.836 "nvme_iov_md": false 00:12:46.836 }, 00:12:46.836 "memory_domains": [ 00:12:46.836 { 00:12:46.836 "dma_device_id": "system", 00:12:46.836 "dma_device_type": 1 00:12:46.836 }, 00:12:46.836 { 00:12:46.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.836 "dma_device_type": 2 00:12:46.836 } 00:12:46.836 ], 00:12:46.836 "driver_specific": {} 00:12:46.836 } 00:12:46.836 ] 00:12:46.836 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.836 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:46.836 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:46.836 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:46.836 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:46.836 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:46.837 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:46.837 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:46.837 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:46.837 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:46.837 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.837 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.837 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:46.837 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.837 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.837 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.837 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.837 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.837 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.837 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.837 "name": "Existed_Raid", 00:12:46.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.837 "strip_size_kb": 64, 00:12:46.837 "state": "configuring", 00:12:46.837 "raid_level": "raid0", 00:12:46.837 "superblock": false, 00:12:46.837 "num_base_bdevs": 4, 00:12:46.837 "num_base_bdevs_discovered": 3, 00:12:46.837 "num_base_bdevs_operational": 4, 00:12:46.837 "base_bdevs_list": [ 00:12:46.837 { 00:12:46.837 "name": "BaseBdev1", 00:12:46.837 "uuid": "b0095df4-5f47-4d38-95c3-b5dfa1baa3d5", 00:12:46.837 "is_configured": true, 00:12:46.837 "data_offset": 0, 00:12:46.837 "data_size": 65536 00:12:46.837 }, 00:12:46.837 { 00:12:46.837 "name": "BaseBdev2", 00:12:46.837 "uuid": "44346adb-4e0f-403b-9e35-f87af37478a8", 00:12:46.837 "is_configured": true, 00:12:46.837 "data_offset": 0, 00:12:46.837 "data_size": 65536 00:12:46.837 }, 00:12:46.837 { 00:12:46.837 "name": "BaseBdev3", 00:12:46.837 "uuid": "56f81885-78e6-4863-ad41-4239c971a454", 00:12:46.837 "is_configured": true, 00:12:46.837 "data_offset": 0, 00:12:46.837 "data_size": 65536 00:12:46.837 }, 00:12:46.837 { 00:12:46.837 "name": "BaseBdev4", 00:12:46.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.837 "is_configured": false, 
00:12:46.837 "data_offset": 0, 00:12:46.837 "data_size": 0 00:12:46.837 } 00:12:46.837 ] 00:12:46.837 }' 00:12:46.837 14:38:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.837 14:38:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.405 14:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:47.405 14:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.405 14:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.405 [2024-11-04 14:38:46.362267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:47.405 [2024-11-04 14:38:46.362336] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:47.405 [2024-11-04 14:38:46.362352] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:47.405 [2024-11-04 14:38:46.362720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:47.405 [2024-11-04 14:38:46.362964] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:47.405 [2024-11-04 14:38:46.363008] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:47.405 [2024-11-04 14:38:46.363332] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.405 BaseBdev4 00:12:47.405 14:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.405 14:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:47.405 14:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:47.405 14:38:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:47.405 14:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:47.405 14:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:47.405 14:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:47.405 14:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:47.405 14:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.405 14:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.405 14:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.405 14:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:47.405 14:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.405 14:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.405 [ 00:12:47.405 { 00:12:47.405 "name": "BaseBdev4", 00:12:47.405 "aliases": [ 00:12:47.405 "e81279f4-5bda-4397-a83b-22e95daf825d" 00:12:47.405 ], 00:12:47.405 "product_name": "Malloc disk", 00:12:47.405 "block_size": 512, 00:12:47.405 "num_blocks": 65536, 00:12:47.405 "uuid": "e81279f4-5bda-4397-a83b-22e95daf825d", 00:12:47.405 "assigned_rate_limits": { 00:12:47.405 "rw_ios_per_sec": 0, 00:12:47.405 "rw_mbytes_per_sec": 0, 00:12:47.405 "r_mbytes_per_sec": 0, 00:12:47.405 "w_mbytes_per_sec": 0 00:12:47.405 }, 00:12:47.405 "claimed": true, 00:12:47.405 "claim_type": "exclusive_write", 00:12:47.405 "zoned": false, 00:12:47.405 "supported_io_types": { 00:12:47.405 "read": true, 00:12:47.405 "write": true, 00:12:47.405 "unmap": true, 00:12:47.405 "flush": true, 00:12:47.405 "reset": true, 00:12:47.405 
"nvme_admin": false, 00:12:47.405 "nvme_io": false, 00:12:47.405 "nvme_io_md": false, 00:12:47.405 "write_zeroes": true, 00:12:47.405 "zcopy": true, 00:12:47.405 "get_zone_info": false, 00:12:47.405 "zone_management": false, 00:12:47.405 "zone_append": false, 00:12:47.405 "compare": false, 00:12:47.405 "compare_and_write": false, 00:12:47.405 "abort": true, 00:12:47.405 "seek_hole": false, 00:12:47.405 "seek_data": false, 00:12:47.405 "copy": true, 00:12:47.405 "nvme_iov_md": false 00:12:47.405 }, 00:12:47.405 "memory_domains": [ 00:12:47.405 { 00:12:47.405 "dma_device_id": "system", 00:12:47.405 "dma_device_type": 1 00:12:47.405 }, 00:12:47.405 { 00:12:47.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.405 "dma_device_type": 2 00:12:47.405 } 00:12:47.405 ], 00:12:47.405 "driver_specific": {} 00:12:47.405 } 00:12:47.405 ] 00:12:47.405 14:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.405 14:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:47.405 14:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:47.405 14:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:47.405 14:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:47.405 14:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.406 14:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:47.406 14:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:47.406 14:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.406 14:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:47.406 14:38:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.406 14:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.406 14:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.406 14:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.406 14:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.406 14:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.406 14:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.406 14:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.406 14:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.406 14:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.406 "name": "Existed_Raid", 00:12:47.406 "uuid": "4187c841-b7f5-407c-b813-7522ee09bbcb", 00:12:47.406 "strip_size_kb": 64, 00:12:47.406 "state": "online", 00:12:47.406 "raid_level": "raid0", 00:12:47.406 "superblock": false, 00:12:47.406 "num_base_bdevs": 4, 00:12:47.406 "num_base_bdevs_discovered": 4, 00:12:47.406 "num_base_bdevs_operational": 4, 00:12:47.406 "base_bdevs_list": [ 00:12:47.406 { 00:12:47.406 "name": "BaseBdev1", 00:12:47.406 "uuid": "b0095df4-5f47-4d38-95c3-b5dfa1baa3d5", 00:12:47.406 "is_configured": true, 00:12:47.406 "data_offset": 0, 00:12:47.406 "data_size": 65536 00:12:47.406 }, 00:12:47.406 { 00:12:47.406 "name": "BaseBdev2", 00:12:47.406 "uuid": "44346adb-4e0f-403b-9e35-f87af37478a8", 00:12:47.406 "is_configured": true, 00:12:47.406 "data_offset": 0, 00:12:47.406 "data_size": 65536 00:12:47.406 }, 00:12:47.406 { 00:12:47.406 "name": "BaseBdev3", 00:12:47.406 "uuid": 
"56f81885-78e6-4863-ad41-4239c971a454", 00:12:47.406 "is_configured": true, 00:12:47.406 "data_offset": 0, 00:12:47.406 "data_size": 65536 00:12:47.406 }, 00:12:47.406 { 00:12:47.406 "name": "BaseBdev4", 00:12:47.406 "uuid": "e81279f4-5bda-4397-a83b-22e95daf825d", 00:12:47.406 "is_configured": true, 00:12:47.406 "data_offset": 0, 00:12:47.406 "data_size": 65536 00:12:47.406 } 00:12:47.406 ] 00:12:47.406 }' 00:12:47.406 14:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.406 14:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.004 14:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:48.004 14:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:48.004 14:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:48.004 14:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:48.004 14:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:48.004 14:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:48.004 14:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:48.004 14:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.005 14:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.005 14:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:48.005 [2024-11-04 14:38:46.934994] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:48.005 14:38:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.005 14:38:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:48.005 "name": "Existed_Raid", 00:12:48.005 "aliases": [ 00:12:48.005 "4187c841-b7f5-407c-b813-7522ee09bbcb" 00:12:48.005 ], 00:12:48.005 "product_name": "Raid Volume", 00:12:48.005 "block_size": 512, 00:12:48.005 "num_blocks": 262144, 00:12:48.005 "uuid": "4187c841-b7f5-407c-b813-7522ee09bbcb", 00:12:48.005 "assigned_rate_limits": { 00:12:48.005 "rw_ios_per_sec": 0, 00:12:48.005 "rw_mbytes_per_sec": 0, 00:12:48.005 "r_mbytes_per_sec": 0, 00:12:48.005 "w_mbytes_per_sec": 0 00:12:48.005 }, 00:12:48.005 "claimed": false, 00:12:48.005 "zoned": false, 00:12:48.005 "supported_io_types": { 00:12:48.005 "read": true, 00:12:48.005 "write": true, 00:12:48.005 "unmap": true, 00:12:48.005 "flush": true, 00:12:48.005 "reset": true, 00:12:48.005 "nvme_admin": false, 00:12:48.005 "nvme_io": false, 00:12:48.005 "nvme_io_md": false, 00:12:48.005 "write_zeroes": true, 00:12:48.005 "zcopy": false, 00:12:48.005 "get_zone_info": false, 00:12:48.005 "zone_management": false, 00:12:48.005 "zone_append": false, 00:12:48.005 "compare": false, 00:12:48.005 "compare_and_write": false, 00:12:48.005 "abort": false, 00:12:48.005 "seek_hole": false, 00:12:48.005 "seek_data": false, 00:12:48.005 "copy": false, 00:12:48.005 "nvme_iov_md": false 00:12:48.005 }, 00:12:48.005 "memory_domains": [ 00:12:48.005 { 00:12:48.005 "dma_device_id": "system", 00:12:48.005 "dma_device_type": 1 00:12:48.005 }, 00:12:48.005 { 00:12:48.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.005 "dma_device_type": 2 00:12:48.005 }, 00:12:48.006 { 00:12:48.006 "dma_device_id": "system", 00:12:48.006 "dma_device_type": 1 00:12:48.006 }, 00:12:48.006 { 00:12:48.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.006 "dma_device_type": 2 00:12:48.006 }, 00:12:48.006 { 00:12:48.006 "dma_device_id": "system", 00:12:48.006 "dma_device_type": 1 00:12:48.006 }, 00:12:48.006 { 00:12:48.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:48.006 "dma_device_type": 2 00:12:48.006 }, 00:12:48.006 { 00:12:48.006 "dma_device_id": "system", 00:12:48.006 "dma_device_type": 1 00:12:48.006 }, 00:12:48.006 { 00:12:48.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.006 "dma_device_type": 2 00:12:48.006 } 00:12:48.006 ], 00:12:48.006 "driver_specific": { 00:12:48.006 "raid": { 00:12:48.006 "uuid": "4187c841-b7f5-407c-b813-7522ee09bbcb", 00:12:48.006 "strip_size_kb": 64, 00:12:48.006 "state": "online", 00:12:48.006 "raid_level": "raid0", 00:12:48.006 "superblock": false, 00:12:48.006 "num_base_bdevs": 4, 00:12:48.006 "num_base_bdevs_discovered": 4, 00:12:48.006 "num_base_bdevs_operational": 4, 00:12:48.006 "base_bdevs_list": [ 00:12:48.006 { 00:12:48.006 "name": "BaseBdev1", 00:12:48.006 "uuid": "b0095df4-5f47-4d38-95c3-b5dfa1baa3d5", 00:12:48.006 "is_configured": true, 00:12:48.006 "data_offset": 0, 00:12:48.006 "data_size": 65536 00:12:48.006 }, 00:12:48.006 { 00:12:48.006 "name": "BaseBdev2", 00:12:48.006 "uuid": "44346adb-4e0f-403b-9e35-f87af37478a8", 00:12:48.006 "is_configured": true, 00:12:48.006 "data_offset": 0, 00:12:48.006 "data_size": 65536 00:12:48.006 }, 00:12:48.006 { 00:12:48.006 "name": "BaseBdev3", 00:12:48.006 "uuid": "56f81885-78e6-4863-ad41-4239c971a454", 00:12:48.006 "is_configured": true, 00:12:48.006 "data_offset": 0, 00:12:48.006 "data_size": 65536 00:12:48.006 }, 00:12:48.006 { 00:12:48.006 "name": "BaseBdev4", 00:12:48.006 "uuid": "e81279f4-5bda-4397-a83b-22e95daf825d", 00:12:48.006 "is_configured": true, 00:12:48.007 "data_offset": 0, 00:12:48.007 "data_size": 65536 00:12:48.007 } 00:12:48.007 ] 00:12:48.007 } 00:12:48.007 } 00:12:48.007 }' 00:12:48.007 14:38:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:48.007 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:48.007 BaseBdev2 00:12:48.007 BaseBdev3 
00:12:48.007 BaseBdev4' 00:12:48.007 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.007 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:48.007 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:48.007 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.007 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:48.007 14:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.007 14:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.007 14:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.269 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:48.269 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:48.269 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:48.269 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.269 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:48.269 14:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.269 14:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.269 14:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.269 14:38:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:48.269 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:48.269 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:48.269 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:48.269 14:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.269 14:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.269 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.269 14:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.269 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:48.269 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:48.269 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:48.269 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:48.269 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.269 14:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.269 14:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.269 14:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.269 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:48.269 14:38:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:48.269 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:48.269 14:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.269 14:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.269 [2024-11-04 14:38:47.306725] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:48.269 [2024-11-04 14:38:47.306901] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:48.269 [2024-11-04 14:38:47.307010] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:48.527 14:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.527 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:48.527 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:48.527 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:48.527 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:48.527 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:48.527 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:48.527 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.527 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:48.527 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:48.527 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:48.527 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:48.527 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.527 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.527 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.528 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.528 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.528 14:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.528 14:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.528 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.528 14:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.528 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.528 "name": "Existed_Raid", 00:12:48.528 "uuid": "4187c841-b7f5-407c-b813-7522ee09bbcb", 00:12:48.528 "strip_size_kb": 64, 00:12:48.528 "state": "offline", 00:12:48.528 "raid_level": "raid0", 00:12:48.528 "superblock": false, 00:12:48.528 "num_base_bdevs": 4, 00:12:48.528 "num_base_bdevs_discovered": 3, 00:12:48.528 "num_base_bdevs_operational": 3, 00:12:48.528 "base_bdevs_list": [ 00:12:48.528 { 00:12:48.528 "name": null, 00:12:48.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.528 "is_configured": false, 00:12:48.528 "data_offset": 0, 00:12:48.528 "data_size": 65536 00:12:48.528 }, 00:12:48.528 { 00:12:48.528 "name": "BaseBdev2", 00:12:48.528 "uuid": "44346adb-4e0f-403b-9e35-f87af37478a8", 00:12:48.528 "is_configured": 
true, 00:12:48.528 "data_offset": 0, 00:12:48.528 "data_size": 65536 00:12:48.528 }, 00:12:48.528 { 00:12:48.528 "name": "BaseBdev3", 00:12:48.528 "uuid": "56f81885-78e6-4863-ad41-4239c971a454", 00:12:48.528 "is_configured": true, 00:12:48.528 "data_offset": 0, 00:12:48.528 "data_size": 65536 00:12:48.528 }, 00:12:48.528 { 00:12:48.528 "name": "BaseBdev4", 00:12:48.528 "uuid": "e81279f4-5bda-4397-a83b-22e95daf825d", 00:12:48.528 "is_configured": true, 00:12:48.528 "data_offset": 0, 00:12:48.528 "data_size": 65536 00:12:48.528 } 00:12:48.528 ] 00:12:48.528 }' 00:12:48.528 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.528 14:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.095 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:49.095 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:49.095 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.095 14:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.095 14:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.095 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:49.095 14:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.095 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:49.095 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:49.095 14:38:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:49.095 14:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:49.095 14:38:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.095 [2024-11-04 14:38:47.971704] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:49.095 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.095 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:49.095 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:49.095 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.095 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.095 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.095 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:49.095 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.095 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:49.095 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:49.095 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:49.095 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.095 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.095 [2024-11-04 14:38:48.118002] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:49.095 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.095 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:49.095 14:38:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:49.095 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:49.095 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.095 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.095 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.353 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.353 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:49.353 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:49.353 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:49.353 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.353 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.353 [2024-11-04 14:38:48.265120] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:49.353 [2024-11-04 14:38:48.265184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:49.354 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.354 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:49.354 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:49.354 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.354 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:12:49.354 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.354 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.354 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.354 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:49.354 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:49.354 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:49.354 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:49.354 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:49.354 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:49.354 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.354 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.354 BaseBdev2 00:12:49.354 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.354 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:49.354 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:49.354 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:49.354 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:49.354 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:49.354 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:12:49.354 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:49.354 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.354 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.354 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.354 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:49.354 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.354 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.354 [ 00:12:49.354 { 00:12:49.354 "name": "BaseBdev2", 00:12:49.354 "aliases": [ 00:12:49.354 "4c76abf8-dfd1-4c5e-bdc6-e3b97c35500f" 00:12:49.354 ], 00:12:49.354 "product_name": "Malloc disk", 00:12:49.354 "block_size": 512, 00:12:49.354 "num_blocks": 65536, 00:12:49.354 "uuid": "4c76abf8-dfd1-4c5e-bdc6-e3b97c35500f", 00:12:49.354 "assigned_rate_limits": { 00:12:49.354 "rw_ios_per_sec": 0, 00:12:49.354 "rw_mbytes_per_sec": 0, 00:12:49.354 "r_mbytes_per_sec": 0, 00:12:49.354 "w_mbytes_per_sec": 0 00:12:49.354 }, 00:12:49.354 "claimed": false, 00:12:49.354 "zoned": false, 00:12:49.354 "supported_io_types": { 00:12:49.354 "read": true, 00:12:49.354 "write": true, 00:12:49.354 "unmap": true, 00:12:49.354 "flush": true, 00:12:49.354 "reset": true, 00:12:49.354 "nvme_admin": false, 00:12:49.354 "nvme_io": false, 00:12:49.354 "nvme_io_md": false, 00:12:49.354 "write_zeroes": true, 00:12:49.354 "zcopy": true, 00:12:49.354 "get_zone_info": false, 00:12:49.354 "zone_management": false, 00:12:49.354 "zone_append": false, 00:12:49.354 "compare": false, 00:12:49.354 "compare_and_write": false, 00:12:49.354 "abort": true, 00:12:49.615 "seek_hole": false, 00:12:49.615 
"seek_data": false, 00:12:49.615 "copy": true, 00:12:49.615 "nvme_iov_md": false 00:12:49.615 }, 00:12:49.615 "memory_domains": [ 00:12:49.615 { 00:12:49.615 "dma_device_id": "system", 00:12:49.615 "dma_device_type": 1 00:12:49.615 }, 00:12:49.615 { 00:12:49.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.615 "dma_device_type": 2 00:12:49.615 } 00:12:49.615 ], 00:12:49.615 "driver_specific": {} 00:12:49.615 } 00:12:49.615 ] 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.615 BaseBdev3 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.615 [ 00:12:49.615 { 00:12:49.615 "name": "BaseBdev3", 00:12:49.615 "aliases": [ 00:12:49.615 "325ccd86-c8b5-4558-8857-dc0a5816d435" 00:12:49.615 ], 00:12:49.615 "product_name": "Malloc disk", 00:12:49.615 "block_size": 512, 00:12:49.615 "num_blocks": 65536, 00:12:49.615 "uuid": "325ccd86-c8b5-4558-8857-dc0a5816d435", 00:12:49.615 "assigned_rate_limits": { 00:12:49.615 "rw_ios_per_sec": 0, 00:12:49.615 "rw_mbytes_per_sec": 0, 00:12:49.615 "r_mbytes_per_sec": 0, 00:12:49.615 "w_mbytes_per_sec": 0 00:12:49.615 }, 00:12:49.615 "claimed": false, 00:12:49.615 "zoned": false, 00:12:49.615 "supported_io_types": { 00:12:49.615 "read": true, 00:12:49.615 "write": true, 00:12:49.615 "unmap": true, 00:12:49.615 "flush": true, 00:12:49.615 "reset": true, 00:12:49.615 "nvme_admin": false, 00:12:49.615 "nvme_io": false, 00:12:49.615 "nvme_io_md": false, 00:12:49.615 "write_zeroes": true, 00:12:49.615 "zcopy": true, 00:12:49.615 "get_zone_info": false, 00:12:49.615 "zone_management": false, 00:12:49.615 "zone_append": false, 00:12:49.615 "compare": false, 00:12:49.615 "compare_and_write": false, 00:12:49.615 "abort": true, 00:12:49.615 "seek_hole": false, 00:12:49.615 "seek_data": false, 
00:12:49.615 "copy": true, 00:12:49.615 "nvme_iov_md": false 00:12:49.615 }, 00:12:49.615 "memory_domains": [ 00:12:49.615 { 00:12:49.615 "dma_device_id": "system", 00:12:49.615 "dma_device_type": 1 00:12:49.615 }, 00:12:49.615 { 00:12:49.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.615 "dma_device_type": 2 00:12:49.615 } 00:12:49.615 ], 00:12:49.615 "driver_specific": {} 00:12:49.615 } 00:12:49.615 ] 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.615 BaseBdev4 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:49.615 
14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.615 [ 00:12:49.615 { 00:12:49.615 "name": "BaseBdev4", 00:12:49.615 "aliases": [ 00:12:49.615 "8c822f54-81e3-448a-9ef9-ed636d136c49" 00:12:49.615 ], 00:12:49.615 "product_name": "Malloc disk", 00:12:49.615 "block_size": 512, 00:12:49.615 "num_blocks": 65536, 00:12:49.615 "uuid": "8c822f54-81e3-448a-9ef9-ed636d136c49", 00:12:49.615 "assigned_rate_limits": { 00:12:49.615 "rw_ios_per_sec": 0, 00:12:49.615 "rw_mbytes_per_sec": 0, 00:12:49.615 "r_mbytes_per_sec": 0, 00:12:49.615 "w_mbytes_per_sec": 0 00:12:49.615 }, 00:12:49.615 "claimed": false, 00:12:49.615 "zoned": false, 00:12:49.615 "supported_io_types": { 00:12:49.615 "read": true, 00:12:49.615 "write": true, 00:12:49.615 "unmap": true, 00:12:49.615 "flush": true, 00:12:49.615 "reset": true, 00:12:49.615 "nvme_admin": false, 00:12:49.615 "nvme_io": false, 00:12:49.615 "nvme_io_md": false, 00:12:49.615 "write_zeroes": true, 00:12:49.615 "zcopy": true, 00:12:49.615 "get_zone_info": false, 00:12:49.615 "zone_management": false, 00:12:49.615 "zone_append": false, 00:12:49.615 "compare": false, 00:12:49.615 "compare_and_write": false, 00:12:49.615 "abort": true, 00:12:49.615 "seek_hole": false, 00:12:49.615 "seek_data": false, 00:12:49.615 
"copy": true, 00:12:49.615 "nvme_iov_md": false 00:12:49.615 }, 00:12:49.615 "memory_domains": [ 00:12:49.615 { 00:12:49.615 "dma_device_id": "system", 00:12:49.615 "dma_device_type": 1 00:12:49.615 }, 00:12:49.615 { 00:12:49.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.615 "dma_device_type": 2 00:12:49.615 } 00:12:49.615 ], 00:12:49.615 "driver_specific": {} 00:12:49.615 } 00:12:49.615 ] 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.615 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:49.616 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:49.616 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:49.616 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:49.616 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.616 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.616 [2024-11-04 14:38:48.637049] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:49.616 [2024-11-04 14:38:48.637253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:49.616 [2024-11-04 14:38:48.637405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:49.616 [2024-11-04 14:38:48.640011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:49.616 [2024-11-04 14:38:48.640240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:49.616 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.616 14:38:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:49.616 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.616 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.616 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:49.616 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.616 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:49.616 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.616 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.616 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.616 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.616 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.616 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.616 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.616 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.616 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.616 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.616 "name": "Existed_Raid", 00:12:49.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.616 "strip_size_kb": 64, 00:12:49.616 "state": "configuring", 00:12:49.616 
"raid_level": "raid0", 00:12:49.616 "superblock": false, 00:12:49.616 "num_base_bdevs": 4, 00:12:49.616 "num_base_bdevs_discovered": 3, 00:12:49.616 "num_base_bdevs_operational": 4, 00:12:49.616 "base_bdevs_list": [ 00:12:49.616 { 00:12:49.616 "name": "BaseBdev1", 00:12:49.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.616 "is_configured": false, 00:12:49.616 "data_offset": 0, 00:12:49.616 "data_size": 0 00:12:49.616 }, 00:12:49.616 { 00:12:49.616 "name": "BaseBdev2", 00:12:49.616 "uuid": "4c76abf8-dfd1-4c5e-bdc6-e3b97c35500f", 00:12:49.616 "is_configured": true, 00:12:49.616 "data_offset": 0, 00:12:49.616 "data_size": 65536 00:12:49.616 }, 00:12:49.616 { 00:12:49.616 "name": "BaseBdev3", 00:12:49.616 "uuid": "325ccd86-c8b5-4558-8857-dc0a5816d435", 00:12:49.616 "is_configured": true, 00:12:49.616 "data_offset": 0, 00:12:49.616 "data_size": 65536 00:12:49.616 }, 00:12:49.616 { 00:12:49.616 "name": "BaseBdev4", 00:12:49.616 "uuid": "8c822f54-81e3-448a-9ef9-ed636d136c49", 00:12:49.616 "is_configured": true, 00:12:49.616 "data_offset": 0, 00:12:49.616 "data_size": 65536 00:12:49.616 } 00:12:49.616 ] 00:12:49.616 }' 00:12:49.616 14:38:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.616 14:38:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.182 14:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:50.182 14:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.182 14:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.182 [2024-11-04 14:38:49.149152] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:50.182 14:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.182 14:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:50.182 14:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.182 14:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.182 14:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:50.182 14:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.182 14:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:50.182 14:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.182 14:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.182 14:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.182 14:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.182 14:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.182 14:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.182 14:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.182 14:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.182 14:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.182 14:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.182 "name": "Existed_Raid", 00:12:50.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.182 "strip_size_kb": 64, 00:12:50.182 "state": "configuring", 00:12:50.182 "raid_level": "raid0", 00:12:50.182 "superblock": false, 00:12:50.182 
"num_base_bdevs": 4, 00:12:50.182 "num_base_bdevs_discovered": 2, 00:12:50.182 "num_base_bdevs_operational": 4, 00:12:50.182 "base_bdevs_list": [ 00:12:50.182 { 00:12:50.182 "name": "BaseBdev1", 00:12:50.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.182 "is_configured": false, 00:12:50.182 "data_offset": 0, 00:12:50.182 "data_size": 0 00:12:50.182 }, 00:12:50.182 { 00:12:50.182 "name": null, 00:12:50.182 "uuid": "4c76abf8-dfd1-4c5e-bdc6-e3b97c35500f", 00:12:50.182 "is_configured": false, 00:12:50.182 "data_offset": 0, 00:12:50.182 "data_size": 65536 00:12:50.182 }, 00:12:50.182 { 00:12:50.182 "name": "BaseBdev3", 00:12:50.182 "uuid": "325ccd86-c8b5-4558-8857-dc0a5816d435", 00:12:50.182 "is_configured": true, 00:12:50.182 "data_offset": 0, 00:12:50.182 "data_size": 65536 00:12:50.182 }, 00:12:50.182 { 00:12:50.182 "name": "BaseBdev4", 00:12:50.182 "uuid": "8c822f54-81e3-448a-9ef9-ed636d136c49", 00:12:50.182 "is_configured": true, 00:12:50.182 "data_offset": 0, 00:12:50.182 "data_size": 65536 00:12:50.182 } 00:12:50.182 ] 00:12:50.182 }' 00:12:50.182 14:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.182 14:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.756 14:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.756 14:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:50.756 14:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.756 14:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.756 14:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.756 14:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:50.756 14:38:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:50.756 14:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.756 14:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.756 [2024-11-04 14:38:49.759988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:50.756 BaseBdev1 00:12:50.756 14:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.756 14:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:50.756 14:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:50.756 14:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:50.756 14:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:50.756 14:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:50.756 14:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:50.756 14:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:50.756 14:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.756 14:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.756 14:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.757 14:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:50.757 14:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.757 14:38:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:50.757 [ 00:12:50.757 { 00:12:50.757 "name": "BaseBdev1", 00:12:50.757 "aliases": [ 00:12:50.757 "612b651b-c93e-46e6-8c8f-ffcd9bfe435a" 00:12:50.757 ], 00:12:50.757 "product_name": "Malloc disk", 00:12:50.757 "block_size": 512, 00:12:50.757 "num_blocks": 65536, 00:12:50.757 "uuid": "612b651b-c93e-46e6-8c8f-ffcd9bfe435a", 00:12:50.757 "assigned_rate_limits": { 00:12:50.757 "rw_ios_per_sec": 0, 00:12:50.757 "rw_mbytes_per_sec": 0, 00:12:50.757 "r_mbytes_per_sec": 0, 00:12:50.757 "w_mbytes_per_sec": 0 00:12:50.757 }, 00:12:50.757 "claimed": true, 00:12:50.757 "claim_type": "exclusive_write", 00:12:50.757 "zoned": false, 00:12:50.757 "supported_io_types": { 00:12:50.757 "read": true, 00:12:50.757 "write": true, 00:12:50.757 "unmap": true, 00:12:50.757 "flush": true, 00:12:50.757 "reset": true, 00:12:50.757 "nvme_admin": false, 00:12:50.757 "nvme_io": false, 00:12:50.757 "nvme_io_md": false, 00:12:50.757 "write_zeroes": true, 00:12:50.757 "zcopy": true, 00:12:50.757 "get_zone_info": false, 00:12:50.757 "zone_management": false, 00:12:50.757 "zone_append": false, 00:12:50.757 "compare": false, 00:12:50.757 "compare_and_write": false, 00:12:50.757 "abort": true, 00:12:50.757 "seek_hole": false, 00:12:50.757 "seek_data": false, 00:12:50.757 "copy": true, 00:12:50.758 "nvme_iov_md": false 00:12:50.758 }, 00:12:50.758 "memory_domains": [ 00:12:50.758 { 00:12:50.758 "dma_device_id": "system", 00:12:50.758 "dma_device_type": 1 00:12:50.758 }, 00:12:50.758 { 00:12:50.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.758 "dma_device_type": 2 00:12:50.758 } 00:12:50.758 ], 00:12:50.758 "driver_specific": {} 00:12:50.758 } 00:12:50.758 ] 00:12:50.758 14:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.758 14:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:50.758 14:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:50.758 14:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.758 14:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.758 14:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:50.758 14:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.758 14:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:50.758 14:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.758 14:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.758 14:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.758 14:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.758 14:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.758 14:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.758 14:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.759 14:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.759 14:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.759 14:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.759 "name": "Existed_Raid", 00:12:50.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.759 "strip_size_kb": 64, 00:12:50.759 "state": "configuring", 00:12:50.759 "raid_level": "raid0", 00:12:50.759 "superblock": false, 
00:12:50.759 "num_base_bdevs": 4, 00:12:50.759 "num_base_bdevs_discovered": 3, 00:12:50.759 "num_base_bdevs_operational": 4, 00:12:50.759 "base_bdevs_list": [ 00:12:50.759 { 00:12:50.759 "name": "BaseBdev1", 00:12:50.759 "uuid": "612b651b-c93e-46e6-8c8f-ffcd9bfe435a", 00:12:50.759 "is_configured": true, 00:12:50.759 "data_offset": 0, 00:12:50.759 "data_size": 65536 00:12:50.759 }, 00:12:50.759 { 00:12:50.759 "name": null, 00:12:50.759 "uuid": "4c76abf8-dfd1-4c5e-bdc6-e3b97c35500f", 00:12:50.759 "is_configured": false, 00:12:50.759 "data_offset": 0, 00:12:50.759 "data_size": 65536 00:12:50.759 }, 00:12:50.759 { 00:12:50.759 "name": "BaseBdev3", 00:12:50.759 "uuid": "325ccd86-c8b5-4558-8857-dc0a5816d435", 00:12:50.759 "is_configured": true, 00:12:50.759 "data_offset": 0, 00:12:50.759 "data_size": 65536 00:12:50.759 }, 00:12:50.759 { 00:12:50.759 "name": "BaseBdev4", 00:12:50.759 "uuid": "8c822f54-81e3-448a-9ef9-ed636d136c49", 00:12:50.759 "is_configured": true, 00:12:50.759 "data_offset": 0, 00:12:50.759 "data_size": 65536 00:12:50.759 } 00:12:50.759 ] 00:12:50.759 }' 00:12:50.761 14:38:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.761 14:38:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.332 14:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.332 14:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:51.332 14:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.332 14:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.332 14:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.332 14:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:51.332 14:38:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:51.332 14:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.332 14:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.332 [2024-11-04 14:38:50.360316] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:51.332 14:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.332 14:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:51.332 14:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.332 14:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:51.332 14:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:51.332 14:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.332 14:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:51.332 14:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.332 14:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.332 14:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.332 14:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.332 14:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.332 14:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.332 14:38:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:51.332 14:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.332 14:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.332 14:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.332 "name": "Existed_Raid", 00:12:51.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.332 "strip_size_kb": 64, 00:12:51.332 "state": "configuring", 00:12:51.332 "raid_level": "raid0", 00:12:51.332 "superblock": false, 00:12:51.332 "num_base_bdevs": 4, 00:12:51.332 "num_base_bdevs_discovered": 2, 00:12:51.332 "num_base_bdevs_operational": 4, 00:12:51.332 "base_bdevs_list": [ 00:12:51.332 { 00:12:51.332 "name": "BaseBdev1", 00:12:51.332 "uuid": "612b651b-c93e-46e6-8c8f-ffcd9bfe435a", 00:12:51.332 "is_configured": true, 00:12:51.332 "data_offset": 0, 00:12:51.332 "data_size": 65536 00:12:51.332 }, 00:12:51.332 { 00:12:51.332 "name": null, 00:12:51.332 "uuid": "4c76abf8-dfd1-4c5e-bdc6-e3b97c35500f", 00:12:51.332 "is_configured": false, 00:12:51.332 "data_offset": 0, 00:12:51.332 "data_size": 65536 00:12:51.332 }, 00:12:51.332 { 00:12:51.332 "name": null, 00:12:51.332 "uuid": "325ccd86-c8b5-4558-8857-dc0a5816d435", 00:12:51.332 "is_configured": false, 00:12:51.332 "data_offset": 0, 00:12:51.332 "data_size": 65536 00:12:51.332 }, 00:12:51.332 { 00:12:51.332 "name": "BaseBdev4", 00:12:51.332 "uuid": "8c822f54-81e3-448a-9ef9-ed636d136c49", 00:12:51.332 "is_configured": true, 00:12:51.332 "data_offset": 0, 00:12:51.332 "data_size": 65536 00:12:51.332 } 00:12:51.332 ] 00:12:51.332 }' 00:12:51.332 14:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.332 14:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.899 14:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:12:51.899 14:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.899 14:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:51.899 14:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.899 14:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.899 14:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:51.899 14:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:51.899 14:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.899 14:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.899 [2024-11-04 14:38:50.944442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:51.899 14:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.899 14:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:51.899 14:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.899 14:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:51.899 14:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:51.899 14:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.899 14:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:51.899 14:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:51.899 14:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.899 14:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.899 14:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.899 14:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.899 14:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.899 14:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.899 14:38:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.899 14:38:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.899 14:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.899 "name": "Existed_Raid", 00:12:51.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.899 "strip_size_kb": 64, 00:12:51.899 "state": "configuring", 00:12:51.899 "raid_level": "raid0", 00:12:51.899 "superblock": false, 00:12:51.899 "num_base_bdevs": 4, 00:12:51.899 "num_base_bdevs_discovered": 3, 00:12:51.899 "num_base_bdevs_operational": 4, 00:12:51.899 "base_bdevs_list": [ 00:12:51.899 { 00:12:51.899 "name": "BaseBdev1", 00:12:51.899 "uuid": "612b651b-c93e-46e6-8c8f-ffcd9bfe435a", 00:12:51.899 "is_configured": true, 00:12:51.899 "data_offset": 0, 00:12:51.899 "data_size": 65536 00:12:51.899 }, 00:12:51.899 { 00:12:51.899 "name": null, 00:12:51.899 "uuid": "4c76abf8-dfd1-4c5e-bdc6-e3b97c35500f", 00:12:51.899 "is_configured": false, 00:12:51.899 "data_offset": 0, 00:12:51.899 "data_size": 65536 00:12:51.899 }, 00:12:51.899 { 00:12:51.899 "name": "BaseBdev3", 00:12:51.899 "uuid": "325ccd86-c8b5-4558-8857-dc0a5816d435", 00:12:51.899 "is_configured": 
true, 00:12:51.899 "data_offset": 0, 00:12:51.899 "data_size": 65536 00:12:51.899 }, 00:12:51.899 { 00:12:51.899 "name": "BaseBdev4", 00:12:51.899 "uuid": "8c822f54-81e3-448a-9ef9-ed636d136c49", 00:12:51.899 "is_configured": true, 00:12:51.899 "data_offset": 0, 00:12:51.899 "data_size": 65536 00:12:51.899 } 00:12:51.899 ] 00:12:51.899 }' 00:12:51.899 14:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.899 14:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.464 14:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:52.464 14:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.464 14:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.464 14:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.464 14:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.464 14:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:52.464 14:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:52.464 14:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.464 14:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.464 [2024-11-04 14:38:51.528657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:52.722 14:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.722 14:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:52.722 14:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:12:52.722 14:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.722 14:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:52.722 14:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.722 14:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:52.722 14:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.722 14:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.722 14:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.722 14:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.722 14:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.722 14:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.723 14:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.723 14:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.723 14:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.723 14:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.723 "name": "Existed_Raid", 00:12:52.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.723 "strip_size_kb": 64, 00:12:52.723 "state": "configuring", 00:12:52.723 "raid_level": "raid0", 00:12:52.723 "superblock": false, 00:12:52.723 "num_base_bdevs": 4, 00:12:52.723 "num_base_bdevs_discovered": 2, 00:12:52.723 "num_base_bdevs_operational": 4, 00:12:52.723 
"base_bdevs_list": [ 00:12:52.723 { 00:12:52.723 "name": null, 00:12:52.723 "uuid": "612b651b-c93e-46e6-8c8f-ffcd9bfe435a", 00:12:52.723 "is_configured": false, 00:12:52.723 "data_offset": 0, 00:12:52.723 "data_size": 65536 00:12:52.723 }, 00:12:52.723 { 00:12:52.723 "name": null, 00:12:52.723 "uuid": "4c76abf8-dfd1-4c5e-bdc6-e3b97c35500f", 00:12:52.723 "is_configured": false, 00:12:52.723 "data_offset": 0, 00:12:52.723 "data_size": 65536 00:12:52.723 }, 00:12:52.723 { 00:12:52.723 "name": "BaseBdev3", 00:12:52.723 "uuid": "325ccd86-c8b5-4558-8857-dc0a5816d435", 00:12:52.723 "is_configured": true, 00:12:52.723 "data_offset": 0, 00:12:52.723 "data_size": 65536 00:12:52.723 }, 00:12:52.723 { 00:12:52.723 "name": "BaseBdev4", 00:12:52.723 "uuid": "8c822f54-81e3-448a-9ef9-ed636d136c49", 00:12:52.723 "is_configured": true, 00:12:52.723 "data_offset": 0, 00:12:52.723 "data_size": 65536 00:12:52.723 } 00:12:52.723 ] 00:12:52.723 }' 00:12:52.723 14:38:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.723 14:38:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.290 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.290 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:53.290 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.290 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.290 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.290 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:53.290 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:53.290 14:38:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.290 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.290 [2024-11-04 14:38:52.215352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:53.290 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.290 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:53.290 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.290 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:53.290 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:53.290 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.290 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:53.290 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.290 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.290 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.290 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.290 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.290 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.290 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.290 14:38:52 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:12:53.290 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.290 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.290 "name": "Existed_Raid", 00:12:53.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.290 "strip_size_kb": 64, 00:12:53.290 "state": "configuring", 00:12:53.290 "raid_level": "raid0", 00:12:53.290 "superblock": false, 00:12:53.290 "num_base_bdevs": 4, 00:12:53.290 "num_base_bdevs_discovered": 3, 00:12:53.290 "num_base_bdevs_operational": 4, 00:12:53.290 "base_bdevs_list": [ 00:12:53.290 { 00:12:53.290 "name": null, 00:12:53.290 "uuid": "612b651b-c93e-46e6-8c8f-ffcd9bfe435a", 00:12:53.290 "is_configured": false, 00:12:53.290 "data_offset": 0, 00:12:53.290 "data_size": 65536 00:12:53.290 }, 00:12:53.290 { 00:12:53.290 "name": "BaseBdev2", 00:12:53.290 "uuid": "4c76abf8-dfd1-4c5e-bdc6-e3b97c35500f", 00:12:53.290 "is_configured": true, 00:12:53.290 "data_offset": 0, 00:12:53.290 "data_size": 65536 00:12:53.290 }, 00:12:53.290 { 00:12:53.290 "name": "BaseBdev3", 00:12:53.290 "uuid": "325ccd86-c8b5-4558-8857-dc0a5816d435", 00:12:53.290 "is_configured": true, 00:12:53.290 "data_offset": 0, 00:12:53.290 "data_size": 65536 00:12:53.290 }, 00:12:53.290 { 00:12:53.290 "name": "BaseBdev4", 00:12:53.290 "uuid": "8c822f54-81e3-448a-9ef9-ed636d136c49", 00:12:53.290 "is_configured": true, 00:12:53.290 "data_offset": 0, 00:12:53.290 "data_size": 65536 00:12:53.290 } 00:12:53.290 ] 00:12:53.290 }' 00:12:53.290 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.290 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.858 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.858 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:12:53.858 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.858 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.858 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.858 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:53.858 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:53.858 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.858 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.858 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.858 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.858 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 612b651b-c93e-46e6-8c8f-ffcd9bfe435a 00:12:53.858 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.858 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.858 [2024-11-04 14:38:52.881195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:53.858 [2024-11-04 14:38:52.881266] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:53.858 [2024-11-04 14:38:52.881279] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:53.858 [2024-11-04 14:38:52.881614] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:53.858 [2024-11-04 14:38:52.881807] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:53.858 [2024-11-04 14:38:52.881830] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:53.858 [2024-11-04 14:38:52.882173] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:53.858 NewBaseBdev 00:12:53.858 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.858 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:53.859 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:12:53.859 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:53.859 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:53.859 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:53.859 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:53.859 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:53.859 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.859 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.859 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.859 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:53.859 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.859 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.859 [ 00:12:53.859 { 
00:12:53.859 "name": "NewBaseBdev", 00:12:53.859 "aliases": [ 00:12:53.859 "612b651b-c93e-46e6-8c8f-ffcd9bfe435a" 00:12:53.859 ], 00:12:53.859 "product_name": "Malloc disk", 00:12:53.859 "block_size": 512, 00:12:53.859 "num_blocks": 65536, 00:12:53.859 "uuid": "612b651b-c93e-46e6-8c8f-ffcd9bfe435a", 00:12:53.859 "assigned_rate_limits": { 00:12:53.859 "rw_ios_per_sec": 0, 00:12:53.859 "rw_mbytes_per_sec": 0, 00:12:53.859 "r_mbytes_per_sec": 0, 00:12:53.859 "w_mbytes_per_sec": 0 00:12:53.859 }, 00:12:53.859 "claimed": true, 00:12:53.859 "claim_type": "exclusive_write", 00:12:53.859 "zoned": false, 00:12:53.859 "supported_io_types": { 00:12:53.859 "read": true, 00:12:53.859 "write": true, 00:12:53.859 "unmap": true, 00:12:53.859 "flush": true, 00:12:53.859 "reset": true, 00:12:53.859 "nvme_admin": false, 00:12:53.859 "nvme_io": false, 00:12:53.859 "nvme_io_md": false, 00:12:53.859 "write_zeroes": true, 00:12:53.859 "zcopy": true, 00:12:53.859 "get_zone_info": false, 00:12:53.859 "zone_management": false, 00:12:53.859 "zone_append": false, 00:12:53.859 "compare": false, 00:12:53.859 "compare_and_write": false, 00:12:53.859 "abort": true, 00:12:53.859 "seek_hole": false, 00:12:53.859 "seek_data": false, 00:12:53.859 "copy": true, 00:12:53.859 "nvme_iov_md": false 00:12:53.859 }, 00:12:53.859 "memory_domains": [ 00:12:53.859 { 00:12:53.859 "dma_device_id": "system", 00:12:53.859 "dma_device_type": 1 00:12:53.859 }, 00:12:53.859 { 00:12:53.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.859 "dma_device_type": 2 00:12:53.859 } 00:12:53.859 ], 00:12:53.859 "driver_specific": {} 00:12:53.859 } 00:12:53.859 ] 00:12:53.859 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.859 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:53.859 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:53.859 
14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.859 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.859 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:53.859 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.859 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:53.859 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.859 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.859 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.859 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.859 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.859 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.859 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.859 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.859 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.859 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.859 "name": "Existed_Raid", 00:12:53.859 "uuid": "1b47a6ba-c987-4de6-9ac7-373754b50d6d", 00:12:53.859 "strip_size_kb": 64, 00:12:53.859 "state": "online", 00:12:53.859 "raid_level": "raid0", 00:12:53.859 "superblock": false, 00:12:53.859 "num_base_bdevs": 4, 00:12:53.859 "num_base_bdevs_discovered": 4, 00:12:53.859 
"num_base_bdevs_operational": 4, 00:12:53.859 "base_bdevs_list": [ 00:12:53.859 { 00:12:53.859 "name": "NewBaseBdev", 00:12:53.859 "uuid": "612b651b-c93e-46e6-8c8f-ffcd9bfe435a", 00:12:53.859 "is_configured": true, 00:12:53.859 "data_offset": 0, 00:12:53.859 "data_size": 65536 00:12:53.859 }, 00:12:53.859 { 00:12:53.859 "name": "BaseBdev2", 00:12:53.859 "uuid": "4c76abf8-dfd1-4c5e-bdc6-e3b97c35500f", 00:12:53.859 "is_configured": true, 00:12:53.859 "data_offset": 0, 00:12:53.859 "data_size": 65536 00:12:53.859 }, 00:12:53.859 { 00:12:53.859 "name": "BaseBdev3", 00:12:53.859 "uuid": "325ccd86-c8b5-4558-8857-dc0a5816d435", 00:12:53.859 "is_configured": true, 00:12:53.859 "data_offset": 0, 00:12:53.859 "data_size": 65536 00:12:53.859 }, 00:12:53.859 { 00:12:53.859 "name": "BaseBdev4", 00:12:53.859 "uuid": "8c822f54-81e3-448a-9ef9-ed636d136c49", 00:12:53.859 "is_configured": true, 00:12:53.859 "data_offset": 0, 00:12:53.859 "data_size": 65536 00:12:53.859 } 00:12:53.859 ] 00:12:53.859 }' 00:12:53.859 14:38:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.859 14:38:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.428 14:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:54.428 14:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:54.428 14:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:54.428 14:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:54.428 14:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:54.428 14:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:54.428 14:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:54.428 
14:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:54.428 14:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.428 14:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.428 [2024-11-04 14:38:53.453842] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:54.428 14:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.428 14:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:54.428 "name": "Existed_Raid", 00:12:54.428 "aliases": [ 00:12:54.428 "1b47a6ba-c987-4de6-9ac7-373754b50d6d" 00:12:54.428 ], 00:12:54.428 "product_name": "Raid Volume", 00:12:54.428 "block_size": 512, 00:12:54.428 "num_blocks": 262144, 00:12:54.428 "uuid": "1b47a6ba-c987-4de6-9ac7-373754b50d6d", 00:12:54.428 "assigned_rate_limits": { 00:12:54.428 "rw_ios_per_sec": 0, 00:12:54.428 "rw_mbytes_per_sec": 0, 00:12:54.428 "r_mbytes_per_sec": 0, 00:12:54.428 "w_mbytes_per_sec": 0 00:12:54.428 }, 00:12:54.428 "claimed": false, 00:12:54.428 "zoned": false, 00:12:54.428 "supported_io_types": { 00:12:54.428 "read": true, 00:12:54.428 "write": true, 00:12:54.428 "unmap": true, 00:12:54.428 "flush": true, 00:12:54.428 "reset": true, 00:12:54.428 "nvme_admin": false, 00:12:54.428 "nvme_io": false, 00:12:54.428 "nvme_io_md": false, 00:12:54.428 "write_zeroes": true, 00:12:54.428 "zcopy": false, 00:12:54.428 "get_zone_info": false, 00:12:54.428 "zone_management": false, 00:12:54.428 "zone_append": false, 00:12:54.428 "compare": false, 00:12:54.428 "compare_and_write": false, 00:12:54.428 "abort": false, 00:12:54.428 "seek_hole": false, 00:12:54.428 "seek_data": false, 00:12:54.428 "copy": false, 00:12:54.428 "nvme_iov_md": false 00:12:54.428 }, 00:12:54.428 "memory_domains": [ 00:12:54.428 { 00:12:54.428 "dma_device_id": 
"system", 00:12:54.428 "dma_device_type": 1 00:12:54.428 }, 00:12:54.428 { 00:12:54.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.428 "dma_device_type": 2 00:12:54.428 }, 00:12:54.428 { 00:12:54.428 "dma_device_id": "system", 00:12:54.428 "dma_device_type": 1 00:12:54.428 }, 00:12:54.428 { 00:12:54.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.428 "dma_device_type": 2 00:12:54.428 }, 00:12:54.428 { 00:12:54.428 "dma_device_id": "system", 00:12:54.428 "dma_device_type": 1 00:12:54.428 }, 00:12:54.428 { 00:12:54.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.428 "dma_device_type": 2 00:12:54.428 }, 00:12:54.428 { 00:12:54.428 "dma_device_id": "system", 00:12:54.428 "dma_device_type": 1 00:12:54.428 }, 00:12:54.428 { 00:12:54.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.428 "dma_device_type": 2 00:12:54.428 } 00:12:54.428 ], 00:12:54.428 "driver_specific": { 00:12:54.428 "raid": { 00:12:54.428 "uuid": "1b47a6ba-c987-4de6-9ac7-373754b50d6d", 00:12:54.428 "strip_size_kb": 64, 00:12:54.428 "state": "online", 00:12:54.428 "raid_level": "raid0", 00:12:54.428 "superblock": false, 00:12:54.428 "num_base_bdevs": 4, 00:12:54.428 "num_base_bdevs_discovered": 4, 00:12:54.428 "num_base_bdevs_operational": 4, 00:12:54.428 "base_bdevs_list": [ 00:12:54.428 { 00:12:54.428 "name": "NewBaseBdev", 00:12:54.428 "uuid": "612b651b-c93e-46e6-8c8f-ffcd9bfe435a", 00:12:54.428 "is_configured": true, 00:12:54.428 "data_offset": 0, 00:12:54.428 "data_size": 65536 00:12:54.428 }, 00:12:54.428 { 00:12:54.428 "name": "BaseBdev2", 00:12:54.428 "uuid": "4c76abf8-dfd1-4c5e-bdc6-e3b97c35500f", 00:12:54.428 "is_configured": true, 00:12:54.428 "data_offset": 0, 00:12:54.428 "data_size": 65536 00:12:54.428 }, 00:12:54.428 { 00:12:54.428 "name": "BaseBdev3", 00:12:54.428 "uuid": "325ccd86-c8b5-4558-8857-dc0a5816d435", 00:12:54.428 "is_configured": true, 00:12:54.428 "data_offset": 0, 00:12:54.428 "data_size": 65536 00:12:54.429 }, 00:12:54.429 { 00:12:54.429 "name": 
"BaseBdev4", 00:12:54.429 "uuid": "8c822f54-81e3-448a-9ef9-ed636d136c49", 00:12:54.429 "is_configured": true, 00:12:54.429 "data_offset": 0, 00:12:54.429 "data_size": 65536 00:12:54.429 } 00:12:54.429 ] 00:12:54.429 } 00:12:54.429 } 00:12:54.429 }' 00:12:54.429 14:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:54.429 14:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:54.429 BaseBdev2 00:12:54.429 BaseBdev3 00:12:54.429 BaseBdev4' 00:12:54.429 14:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.687 14:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:54.687 14:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:54.687 14:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:54.687 14:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.687 14:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.687 14:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.687 14:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.687 14:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.687 14:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.687 14:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:54.688 14:38:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:54.688 14:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.688 14:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.688 14:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.688 14:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.688 14:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.688 14:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.688 14:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:54.688 14:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.688 14:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:54.688 14:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.688 14:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.688 14:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.688 14:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.688 14:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.688 14:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:54.688 14:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:54.688 14:38:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.688 14:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.688 14:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.688 14:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.946 14:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.946 14:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.946 14:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:54.946 14:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.946 14:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.946 [2024-11-04 14:38:53.829517] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:54.946 [2024-11-04 14:38:53.829556] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:54.946 [2024-11-04 14:38:53.829660] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:54.946 [2024-11-04 14:38:53.829747] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:54.946 [2024-11-04 14:38:53.829764] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:54.946 14:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.946 14:38:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69451 00:12:54.946 14:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 
-- # '[' -z 69451 ']' 00:12:54.946 14:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 69451 00:12:54.946 14:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:12:54.946 14:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:54.946 14:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69451 00:12:54.946 killing process with pid 69451 00:12:54.946 14:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:54.946 14:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:54.946 14:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69451' 00:12:54.946 14:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 69451 00:12:54.946 [2024-11-04 14:38:53.869955] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:54.946 14:38:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 69451 00:12:55.210 [2024-11-04 14:38:54.228335] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:56.151 14:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:56.151 00:12:56.151 real 0m12.817s 00:12:56.151 user 0m21.292s 00:12:56.151 sys 0m1.772s 00:12:56.151 14:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:56.151 ************************************ 00:12:56.151 END TEST raid_state_function_test 00:12:56.151 ************************************ 00:12:56.151 14:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.410 14:38:55 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
00:12:56.410 14:38:55 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:56.410 14:38:55 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:56.410 14:38:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:56.410 ************************************ 00:12:56.410 START TEST raid_state_function_test_sb 00:12:56.410 ************************************ 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 true 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:56.410 14:38:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:56.410 Process raid pid: 70134 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70134 00:12:56.410 14:38:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70134' 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70134 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 70134 ']' 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:56.410 14:38:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.410 [2024-11-04 14:38:55.429282] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:12:56.410 [2024-11-04 14:38:55.429688] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:56.669 [2024-11-04 14:38:55.616503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.669 [2024-11-04 14:38:55.749897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.927 [2024-11-04 14:38:55.954790] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:56.927 [2024-11-04 14:38:55.955108] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:57.494 14:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:57.494 14:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:12:57.494 14:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:57.494 14:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.494 14:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.494 [2024-11-04 14:38:56.419889] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:57.494 [2024-11-04 14:38:56.419972] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:57.494 [2024-11-04 14:38:56.419991] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:57.494 [2024-11-04 14:38:56.420013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:57.494 [2024-11-04 14:38:56.420023] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:12:57.494 [2024-11-04 14:38:56.420037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:57.494 [2024-11-04 14:38:56.420046] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:57.494 [2024-11-04 14:38:56.420059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:57.494 14:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.494 14:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:57.494 14:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:57.494 14:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:57.494 14:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:57.494 14:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.494 14:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:57.494 14:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.494 14:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.494 14:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.494 14:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.494 14:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.494 14:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.494 14:38:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.494 14:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.494 14:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.494 14:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.494 "name": "Existed_Raid", 00:12:57.494 "uuid": "16937e2e-0cd0-4538-8971-31014e97d342", 00:12:57.494 "strip_size_kb": 64, 00:12:57.494 "state": "configuring", 00:12:57.494 "raid_level": "raid0", 00:12:57.494 "superblock": true, 00:12:57.494 "num_base_bdevs": 4, 00:12:57.494 "num_base_bdevs_discovered": 0, 00:12:57.494 "num_base_bdevs_operational": 4, 00:12:57.494 "base_bdevs_list": [ 00:12:57.494 { 00:12:57.494 "name": "BaseBdev1", 00:12:57.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.494 "is_configured": false, 00:12:57.494 "data_offset": 0, 00:12:57.494 "data_size": 0 00:12:57.494 }, 00:12:57.494 { 00:12:57.494 "name": "BaseBdev2", 00:12:57.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.494 "is_configured": false, 00:12:57.494 "data_offset": 0, 00:12:57.494 "data_size": 0 00:12:57.494 }, 00:12:57.494 { 00:12:57.494 "name": "BaseBdev3", 00:12:57.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.494 "is_configured": false, 00:12:57.494 "data_offset": 0, 00:12:57.494 "data_size": 0 00:12:57.494 }, 00:12:57.494 { 00:12:57.494 "name": "BaseBdev4", 00:12:57.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.494 "is_configured": false, 00:12:57.494 "data_offset": 0, 00:12:57.494 "data_size": 0 00:12:57.494 } 00:12:57.494 ] 00:12:57.494 }' 00:12:57.494 14:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.494 14:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.062 14:38:56 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:58.062 14:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.062 14:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.062 [2024-11-04 14:38:56.943942] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:58.062 [2024-11-04 14:38:56.943989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:58.062 14:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.062 14:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:58.062 14:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.062 14:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.062 [2024-11-04 14:38:56.951936] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:58.062 [2024-11-04 14:38:56.951986] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:58.062 [2024-11-04 14:38:56.952002] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:58.062 [2024-11-04 14:38:56.952017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:58.062 [2024-11-04 14:38:56.952026] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:58.062 [2024-11-04 14:38:56.952040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:58.062 [2024-11-04 14:38:56.952049] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:12:58.062 [2024-11-04 14:38:56.952062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:58.062 14:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.062 14:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:58.062 14:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.062 14:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.062 [2024-11-04 14:38:56.997092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:58.062 BaseBdev1 00:12:58.062 14:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.062 14:38:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:58.062 14:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:58.062 14:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:58.062 14:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:58.062 14:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:58.062 14:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:58.062 14:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:58.062 14:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.062 14:38:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.062 14:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:58.062 14:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:58.062 14:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.062 14:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.062 [ 00:12:58.062 { 00:12:58.062 "name": "BaseBdev1", 00:12:58.062 "aliases": [ 00:12:58.062 "217eb066-fdd1-44f9-8383-35f245cac734" 00:12:58.062 ], 00:12:58.062 "product_name": "Malloc disk", 00:12:58.062 "block_size": 512, 00:12:58.062 "num_blocks": 65536, 00:12:58.062 "uuid": "217eb066-fdd1-44f9-8383-35f245cac734", 00:12:58.062 "assigned_rate_limits": { 00:12:58.062 "rw_ios_per_sec": 0, 00:12:58.062 "rw_mbytes_per_sec": 0, 00:12:58.062 "r_mbytes_per_sec": 0, 00:12:58.062 "w_mbytes_per_sec": 0 00:12:58.062 }, 00:12:58.062 "claimed": true, 00:12:58.062 "claim_type": "exclusive_write", 00:12:58.062 "zoned": false, 00:12:58.062 "supported_io_types": { 00:12:58.062 "read": true, 00:12:58.062 "write": true, 00:12:58.062 "unmap": true, 00:12:58.062 "flush": true, 00:12:58.062 "reset": true, 00:12:58.062 "nvme_admin": false, 00:12:58.062 "nvme_io": false, 00:12:58.062 "nvme_io_md": false, 00:12:58.062 "write_zeroes": true, 00:12:58.062 "zcopy": true, 00:12:58.062 "get_zone_info": false, 00:12:58.062 "zone_management": false, 00:12:58.062 "zone_append": false, 00:12:58.062 "compare": false, 00:12:58.062 "compare_and_write": false, 00:12:58.062 "abort": true, 00:12:58.062 "seek_hole": false, 00:12:58.062 "seek_data": false, 00:12:58.062 "copy": true, 00:12:58.062 "nvme_iov_md": false 00:12:58.062 }, 00:12:58.062 "memory_domains": [ 00:12:58.062 { 00:12:58.062 "dma_device_id": "system", 00:12:58.062 "dma_device_type": 1 00:12:58.062 }, 00:12:58.062 { 00:12:58.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.062 "dma_device_type": 2 00:12:58.062 } 00:12:58.062 ], 00:12:58.062 "driver_specific": {} 
00:12:58.062 } 00:12:58.062 ] 00:12:58.062 14:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.062 14:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:58.062 14:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:58.062 14:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:58.062 14:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:58.062 14:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:58.062 14:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.062 14:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:58.062 14:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.062 14:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.062 14:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.062 14:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.062 14:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.062 14:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.062 14:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.062 14:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.062 14:38:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.062 14:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.062 "name": "Existed_Raid", 00:12:58.062 "uuid": "695650e9-162e-4c3f-9670-0e06eb246e9a", 00:12:58.062 "strip_size_kb": 64, 00:12:58.062 "state": "configuring", 00:12:58.062 "raid_level": "raid0", 00:12:58.062 "superblock": true, 00:12:58.062 "num_base_bdevs": 4, 00:12:58.062 "num_base_bdevs_discovered": 1, 00:12:58.062 "num_base_bdevs_operational": 4, 00:12:58.062 "base_bdevs_list": [ 00:12:58.062 { 00:12:58.062 "name": "BaseBdev1", 00:12:58.062 "uuid": "217eb066-fdd1-44f9-8383-35f245cac734", 00:12:58.062 "is_configured": true, 00:12:58.062 "data_offset": 2048, 00:12:58.062 "data_size": 63488 00:12:58.062 }, 00:12:58.062 { 00:12:58.062 "name": "BaseBdev2", 00:12:58.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.062 "is_configured": false, 00:12:58.062 "data_offset": 0, 00:12:58.062 "data_size": 0 00:12:58.062 }, 00:12:58.062 { 00:12:58.062 "name": "BaseBdev3", 00:12:58.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.062 "is_configured": false, 00:12:58.062 "data_offset": 0, 00:12:58.062 "data_size": 0 00:12:58.062 }, 00:12:58.062 { 00:12:58.062 "name": "BaseBdev4", 00:12:58.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.062 "is_configured": false, 00:12:58.062 "data_offset": 0, 00:12:58.062 "data_size": 0 00:12:58.062 } 00:12:58.062 ] 00:12:58.062 }' 00:12:58.062 14:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.062 14:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.629 14:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:58.629 14:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.629 14:38:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:58.629 [2024-11-04 14:38:57.517293] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:58.629 [2024-11-04 14:38:57.517386] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:58.629 14:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.629 14:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:58.629 14:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.629 14:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.629 [2024-11-04 14:38:57.529363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:58.629 [2024-11-04 14:38:57.532054] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:58.629 [2024-11-04 14:38:57.532244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:58.629 [2024-11-04 14:38:57.532373] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:58.629 [2024-11-04 14:38:57.532441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:58.629 [2024-11-04 14:38:57.532546] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:58.629 [2024-11-04 14:38:57.532676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:58.629 14:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.629 14:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:58.629 14:38:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:58.629 14:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:58.629 14:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:58.629 14:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:58.629 14:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:58.629 14:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.629 14:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:58.629 14:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.629 14:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.629 14:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.629 14:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.629 14:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.629 14:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.629 14:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.629 14:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.629 14:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.629 14:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.629 "name": 
"Existed_Raid", 00:12:58.629 "uuid": "295a6563-9c71-4a0f-9c72-3ee79ab31229", 00:12:58.629 "strip_size_kb": 64, 00:12:58.629 "state": "configuring", 00:12:58.629 "raid_level": "raid0", 00:12:58.629 "superblock": true, 00:12:58.629 "num_base_bdevs": 4, 00:12:58.629 "num_base_bdevs_discovered": 1, 00:12:58.629 "num_base_bdevs_operational": 4, 00:12:58.629 "base_bdevs_list": [ 00:12:58.629 { 00:12:58.629 "name": "BaseBdev1", 00:12:58.629 "uuid": "217eb066-fdd1-44f9-8383-35f245cac734", 00:12:58.629 "is_configured": true, 00:12:58.629 "data_offset": 2048, 00:12:58.629 "data_size": 63488 00:12:58.629 }, 00:12:58.629 { 00:12:58.629 "name": "BaseBdev2", 00:12:58.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.629 "is_configured": false, 00:12:58.629 "data_offset": 0, 00:12:58.629 "data_size": 0 00:12:58.629 }, 00:12:58.629 { 00:12:58.629 "name": "BaseBdev3", 00:12:58.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.629 "is_configured": false, 00:12:58.629 "data_offset": 0, 00:12:58.629 "data_size": 0 00:12:58.629 }, 00:12:58.629 { 00:12:58.629 "name": "BaseBdev4", 00:12:58.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.629 "is_configured": false, 00:12:58.629 "data_offset": 0, 00:12:58.629 "data_size": 0 00:12:58.629 } 00:12:58.629 ] 00:12:58.629 }' 00:12:58.629 14:38:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.629 14:38:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.195 [2024-11-04 14:38:58.085131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:12:59.195 BaseBdev2 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.195 [ 00:12:59.195 { 00:12:59.195 "name": "BaseBdev2", 00:12:59.195 "aliases": [ 00:12:59.195 "13b2af98-7cb1-4757-8c31-1ec31f98efb9" 00:12:59.195 ], 00:12:59.195 "product_name": "Malloc disk", 00:12:59.195 "block_size": 512, 00:12:59.195 "num_blocks": 65536, 00:12:59.195 "uuid": "13b2af98-7cb1-4757-8c31-1ec31f98efb9", 00:12:59.195 
"assigned_rate_limits": { 00:12:59.195 "rw_ios_per_sec": 0, 00:12:59.195 "rw_mbytes_per_sec": 0, 00:12:59.195 "r_mbytes_per_sec": 0, 00:12:59.195 "w_mbytes_per_sec": 0 00:12:59.195 }, 00:12:59.195 "claimed": true, 00:12:59.195 "claim_type": "exclusive_write", 00:12:59.195 "zoned": false, 00:12:59.195 "supported_io_types": { 00:12:59.195 "read": true, 00:12:59.195 "write": true, 00:12:59.195 "unmap": true, 00:12:59.195 "flush": true, 00:12:59.195 "reset": true, 00:12:59.195 "nvme_admin": false, 00:12:59.195 "nvme_io": false, 00:12:59.195 "nvme_io_md": false, 00:12:59.195 "write_zeroes": true, 00:12:59.195 "zcopy": true, 00:12:59.195 "get_zone_info": false, 00:12:59.195 "zone_management": false, 00:12:59.195 "zone_append": false, 00:12:59.195 "compare": false, 00:12:59.195 "compare_and_write": false, 00:12:59.195 "abort": true, 00:12:59.195 "seek_hole": false, 00:12:59.195 "seek_data": false, 00:12:59.195 "copy": true, 00:12:59.195 "nvme_iov_md": false 00:12:59.195 }, 00:12:59.195 "memory_domains": [ 00:12:59.195 { 00:12:59.195 "dma_device_id": "system", 00:12:59.195 "dma_device_type": 1 00:12:59.195 }, 00:12:59.195 { 00:12:59.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.195 "dma_device_type": 2 00:12:59.195 } 00:12:59.195 ], 00:12:59.195 "driver_specific": {} 00:12:59.195 } 00:12:59.195 ] 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.195 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.196 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.196 "name": "Existed_Raid", 00:12:59.196 "uuid": "295a6563-9c71-4a0f-9c72-3ee79ab31229", 00:12:59.196 "strip_size_kb": 64, 00:12:59.196 "state": "configuring", 00:12:59.196 "raid_level": "raid0", 00:12:59.196 "superblock": true, 00:12:59.196 "num_base_bdevs": 4, 00:12:59.196 "num_base_bdevs_discovered": 2, 00:12:59.196 "num_base_bdevs_operational": 4, 
00:12:59.196 "base_bdevs_list": [ 00:12:59.196 { 00:12:59.196 "name": "BaseBdev1", 00:12:59.196 "uuid": "217eb066-fdd1-44f9-8383-35f245cac734", 00:12:59.196 "is_configured": true, 00:12:59.196 "data_offset": 2048, 00:12:59.196 "data_size": 63488 00:12:59.196 }, 00:12:59.196 { 00:12:59.196 "name": "BaseBdev2", 00:12:59.196 "uuid": "13b2af98-7cb1-4757-8c31-1ec31f98efb9", 00:12:59.196 "is_configured": true, 00:12:59.196 "data_offset": 2048, 00:12:59.196 "data_size": 63488 00:12:59.196 }, 00:12:59.196 { 00:12:59.196 "name": "BaseBdev3", 00:12:59.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.196 "is_configured": false, 00:12:59.196 "data_offset": 0, 00:12:59.196 "data_size": 0 00:12:59.196 }, 00:12:59.196 { 00:12:59.196 "name": "BaseBdev4", 00:12:59.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.196 "is_configured": false, 00:12:59.196 "data_offset": 0, 00:12:59.196 "data_size": 0 00:12:59.196 } 00:12:59.196 ] 00:12:59.196 }' 00:12:59.196 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.196 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.762 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:59.762 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.762 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.762 [2024-11-04 14:38:58.706508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:59.762 BaseBdev3 00:12:59.762 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.762 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:59.762 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # 
local bdev_name=BaseBdev3 00:12:59.762 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:59.762 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:59.762 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:59.762 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:59.762 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:59.763 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.763 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.763 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.763 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:59.763 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.763 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.763 [ 00:12:59.763 { 00:12:59.763 "name": "BaseBdev3", 00:12:59.763 "aliases": [ 00:12:59.763 "6675b1b5-5681-49c4-ac25-225add65d7e7" 00:12:59.763 ], 00:12:59.763 "product_name": "Malloc disk", 00:12:59.763 "block_size": 512, 00:12:59.763 "num_blocks": 65536, 00:12:59.763 "uuid": "6675b1b5-5681-49c4-ac25-225add65d7e7", 00:12:59.763 "assigned_rate_limits": { 00:12:59.763 "rw_ios_per_sec": 0, 00:12:59.763 "rw_mbytes_per_sec": 0, 00:12:59.763 "r_mbytes_per_sec": 0, 00:12:59.763 "w_mbytes_per_sec": 0 00:12:59.763 }, 00:12:59.763 "claimed": true, 00:12:59.763 "claim_type": "exclusive_write", 00:12:59.763 "zoned": false, 00:12:59.763 "supported_io_types": { 00:12:59.763 "read": true, 00:12:59.763 
"write": true, 00:12:59.763 "unmap": true, 00:12:59.763 "flush": true, 00:12:59.763 "reset": true, 00:12:59.763 "nvme_admin": false, 00:12:59.763 "nvme_io": false, 00:12:59.763 "nvme_io_md": false, 00:12:59.763 "write_zeroes": true, 00:12:59.763 "zcopy": true, 00:12:59.763 "get_zone_info": false, 00:12:59.763 "zone_management": false, 00:12:59.763 "zone_append": false, 00:12:59.763 "compare": false, 00:12:59.763 "compare_and_write": false, 00:12:59.763 "abort": true, 00:12:59.763 "seek_hole": false, 00:12:59.763 "seek_data": false, 00:12:59.763 "copy": true, 00:12:59.763 "nvme_iov_md": false 00:12:59.763 }, 00:12:59.763 "memory_domains": [ 00:12:59.763 { 00:12:59.763 "dma_device_id": "system", 00:12:59.763 "dma_device_type": 1 00:12:59.763 }, 00:12:59.763 { 00:12:59.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.763 "dma_device_type": 2 00:12:59.763 } 00:12:59.763 ], 00:12:59.763 "driver_specific": {} 00:12:59.763 } 00:12:59.763 ] 00:12:59.763 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.763 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:59.763 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:59.763 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:59.763 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:59.763 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:59.763 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:59.763 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:59.763 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:59.763 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:59.763 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.763 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.763 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.763 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.763 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.763 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.763 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.763 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.763 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.763 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.763 "name": "Existed_Raid", 00:12:59.763 "uuid": "295a6563-9c71-4a0f-9c72-3ee79ab31229", 00:12:59.763 "strip_size_kb": 64, 00:12:59.763 "state": "configuring", 00:12:59.763 "raid_level": "raid0", 00:12:59.763 "superblock": true, 00:12:59.763 "num_base_bdevs": 4, 00:12:59.763 "num_base_bdevs_discovered": 3, 00:12:59.763 "num_base_bdevs_operational": 4, 00:12:59.763 "base_bdevs_list": [ 00:12:59.763 { 00:12:59.763 "name": "BaseBdev1", 00:12:59.763 "uuid": "217eb066-fdd1-44f9-8383-35f245cac734", 00:12:59.763 "is_configured": true, 00:12:59.763 "data_offset": 2048, 00:12:59.763 "data_size": 63488 00:12:59.763 }, 00:12:59.763 { 00:12:59.763 "name": "BaseBdev2", 00:12:59.763 "uuid": 
"13b2af98-7cb1-4757-8c31-1ec31f98efb9", 00:12:59.763 "is_configured": true, 00:12:59.763 "data_offset": 2048, 00:12:59.763 "data_size": 63488 00:12:59.763 }, 00:12:59.763 { 00:12:59.763 "name": "BaseBdev3", 00:12:59.763 "uuid": "6675b1b5-5681-49c4-ac25-225add65d7e7", 00:12:59.763 "is_configured": true, 00:12:59.763 "data_offset": 2048, 00:12:59.763 "data_size": 63488 00:12:59.763 }, 00:12:59.763 { 00:12:59.763 "name": "BaseBdev4", 00:12:59.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.763 "is_configured": false, 00:12:59.763 "data_offset": 0, 00:12:59.763 "data_size": 0 00:12:59.763 } 00:12:59.763 ] 00:12:59.763 }' 00:12:59.763 14:38:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.763 14:38:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.330 [2024-11-04 14:38:59.313139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:00.330 [2024-11-04 14:38:59.313666] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:00.330 [2024-11-04 14:38:59.313694] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:00.330 [2024-11-04 14:38:59.314077] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:00.330 BaseBdev4 00:13:00.330 [2024-11-04 14:38:59.314279] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:00.330 [2024-11-04 14:38:59.314304] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:13:00.330 [2024-11-04 14:38:59.314480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.330 [ 00:13:00.330 { 00:13:00.330 "name": "BaseBdev4", 00:13:00.330 "aliases": [ 00:13:00.330 "1962f44e-9694-4d43-b3d8-6f460da919b6" 00:13:00.330 ], 00:13:00.330 "product_name": "Malloc disk", 00:13:00.330 "block_size": 512, 00:13:00.330 
"num_blocks": 65536, 00:13:00.330 "uuid": "1962f44e-9694-4d43-b3d8-6f460da919b6", 00:13:00.330 "assigned_rate_limits": { 00:13:00.330 "rw_ios_per_sec": 0, 00:13:00.330 "rw_mbytes_per_sec": 0, 00:13:00.330 "r_mbytes_per_sec": 0, 00:13:00.330 "w_mbytes_per_sec": 0 00:13:00.330 }, 00:13:00.330 "claimed": true, 00:13:00.330 "claim_type": "exclusive_write", 00:13:00.330 "zoned": false, 00:13:00.330 "supported_io_types": { 00:13:00.330 "read": true, 00:13:00.330 "write": true, 00:13:00.330 "unmap": true, 00:13:00.330 "flush": true, 00:13:00.330 "reset": true, 00:13:00.330 "nvme_admin": false, 00:13:00.330 "nvme_io": false, 00:13:00.330 "nvme_io_md": false, 00:13:00.330 "write_zeroes": true, 00:13:00.330 "zcopy": true, 00:13:00.330 "get_zone_info": false, 00:13:00.330 "zone_management": false, 00:13:00.330 "zone_append": false, 00:13:00.330 "compare": false, 00:13:00.330 "compare_and_write": false, 00:13:00.330 "abort": true, 00:13:00.330 "seek_hole": false, 00:13:00.330 "seek_data": false, 00:13:00.330 "copy": true, 00:13:00.330 "nvme_iov_md": false 00:13:00.330 }, 00:13:00.330 "memory_domains": [ 00:13:00.330 { 00:13:00.330 "dma_device_id": "system", 00:13:00.330 "dma_device_type": 1 00:13:00.330 }, 00:13:00.330 { 00:13:00.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.330 "dma_device_type": 2 00:13:00.330 } 00:13:00.330 ], 00:13:00.330 "driver_specific": {} 00:13:00.330 } 00:13:00.330 ] 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.330 "name": "Existed_Raid", 00:13:00.330 "uuid": "295a6563-9c71-4a0f-9c72-3ee79ab31229", 00:13:00.330 "strip_size_kb": 64, 00:13:00.330 "state": "online", 00:13:00.330 "raid_level": "raid0", 00:13:00.330 "superblock": true, 00:13:00.330 "num_base_bdevs": 4, 
00:13:00.330 "num_base_bdevs_discovered": 4, 00:13:00.330 "num_base_bdevs_operational": 4, 00:13:00.330 "base_bdevs_list": [ 00:13:00.330 { 00:13:00.330 "name": "BaseBdev1", 00:13:00.330 "uuid": "217eb066-fdd1-44f9-8383-35f245cac734", 00:13:00.330 "is_configured": true, 00:13:00.330 "data_offset": 2048, 00:13:00.330 "data_size": 63488 00:13:00.330 }, 00:13:00.330 { 00:13:00.330 "name": "BaseBdev2", 00:13:00.330 "uuid": "13b2af98-7cb1-4757-8c31-1ec31f98efb9", 00:13:00.330 "is_configured": true, 00:13:00.330 "data_offset": 2048, 00:13:00.330 "data_size": 63488 00:13:00.330 }, 00:13:00.330 { 00:13:00.330 "name": "BaseBdev3", 00:13:00.330 "uuid": "6675b1b5-5681-49c4-ac25-225add65d7e7", 00:13:00.330 "is_configured": true, 00:13:00.330 "data_offset": 2048, 00:13:00.330 "data_size": 63488 00:13:00.330 }, 00:13:00.330 { 00:13:00.330 "name": "BaseBdev4", 00:13:00.330 "uuid": "1962f44e-9694-4d43-b3d8-6f460da919b6", 00:13:00.330 "is_configured": true, 00:13:00.330 "data_offset": 2048, 00:13:00.330 "data_size": 63488 00:13:00.330 } 00:13:00.330 ] 00:13:00.330 }' 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.330 14:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.897 14:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:00.897 14:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:00.897 14:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:00.897 14:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:00.897 14:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:00.897 14:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:00.897 
14:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:00.897 14:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.897 14:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:00.897 14:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.897 [2024-11-04 14:38:59.873826] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:00.897 14:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.897 14:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:00.897 "name": "Existed_Raid", 00:13:00.897 "aliases": [ 00:13:00.897 "295a6563-9c71-4a0f-9c72-3ee79ab31229" 00:13:00.897 ], 00:13:00.897 "product_name": "Raid Volume", 00:13:00.897 "block_size": 512, 00:13:00.897 "num_blocks": 253952, 00:13:00.897 "uuid": "295a6563-9c71-4a0f-9c72-3ee79ab31229", 00:13:00.897 "assigned_rate_limits": { 00:13:00.897 "rw_ios_per_sec": 0, 00:13:00.897 "rw_mbytes_per_sec": 0, 00:13:00.897 "r_mbytes_per_sec": 0, 00:13:00.897 "w_mbytes_per_sec": 0 00:13:00.897 }, 00:13:00.897 "claimed": false, 00:13:00.897 "zoned": false, 00:13:00.897 "supported_io_types": { 00:13:00.897 "read": true, 00:13:00.897 "write": true, 00:13:00.897 "unmap": true, 00:13:00.897 "flush": true, 00:13:00.897 "reset": true, 00:13:00.897 "nvme_admin": false, 00:13:00.897 "nvme_io": false, 00:13:00.897 "nvme_io_md": false, 00:13:00.897 "write_zeroes": true, 00:13:00.897 "zcopy": false, 00:13:00.897 "get_zone_info": false, 00:13:00.897 "zone_management": false, 00:13:00.897 "zone_append": false, 00:13:00.897 "compare": false, 00:13:00.897 "compare_and_write": false, 00:13:00.897 "abort": false, 00:13:00.897 "seek_hole": false, 00:13:00.897 "seek_data": false, 00:13:00.897 "copy": false, 00:13:00.897 
"nvme_iov_md": false 00:13:00.897 }, 00:13:00.897 "memory_domains": [ 00:13:00.897 { 00:13:00.897 "dma_device_id": "system", 00:13:00.897 "dma_device_type": 1 00:13:00.897 }, 00:13:00.897 { 00:13:00.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.897 "dma_device_type": 2 00:13:00.897 }, 00:13:00.897 { 00:13:00.897 "dma_device_id": "system", 00:13:00.897 "dma_device_type": 1 00:13:00.897 }, 00:13:00.897 { 00:13:00.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.897 "dma_device_type": 2 00:13:00.897 }, 00:13:00.897 { 00:13:00.897 "dma_device_id": "system", 00:13:00.897 "dma_device_type": 1 00:13:00.897 }, 00:13:00.897 { 00:13:00.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.897 "dma_device_type": 2 00:13:00.897 }, 00:13:00.897 { 00:13:00.897 "dma_device_id": "system", 00:13:00.897 "dma_device_type": 1 00:13:00.897 }, 00:13:00.897 { 00:13:00.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.897 "dma_device_type": 2 00:13:00.897 } 00:13:00.897 ], 00:13:00.897 "driver_specific": { 00:13:00.897 "raid": { 00:13:00.897 "uuid": "295a6563-9c71-4a0f-9c72-3ee79ab31229", 00:13:00.897 "strip_size_kb": 64, 00:13:00.897 "state": "online", 00:13:00.897 "raid_level": "raid0", 00:13:00.897 "superblock": true, 00:13:00.897 "num_base_bdevs": 4, 00:13:00.897 "num_base_bdevs_discovered": 4, 00:13:00.897 "num_base_bdevs_operational": 4, 00:13:00.897 "base_bdevs_list": [ 00:13:00.897 { 00:13:00.897 "name": "BaseBdev1", 00:13:00.897 "uuid": "217eb066-fdd1-44f9-8383-35f245cac734", 00:13:00.897 "is_configured": true, 00:13:00.897 "data_offset": 2048, 00:13:00.897 "data_size": 63488 00:13:00.897 }, 00:13:00.897 { 00:13:00.897 "name": "BaseBdev2", 00:13:00.897 "uuid": "13b2af98-7cb1-4757-8c31-1ec31f98efb9", 00:13:00.897 "is_configured": true, 00:13:00.897 "data_offset": 2048, 00:13:00.897 "data_size": 63488 00:13:00.897 }, 00:13:00.897 { 00:13:00.897 "name": "BaseBdev3", 00:13:00.897 "uuid": "6675b1b5-5681-49c4-ac25-225add65d7e7", 00:13:00.897 "is_configured": true, 
00:13:00.897 "data_offset": 2048, 00:13:00.897 "data_size": 63488 00:13:00.897 }, 00:13:00.897 { 00:13:00.897 "name": "BaseBdev4", 00:13:00.897 "uuid": "1962f44e-9694-4d43-b3d8-6f460da919b6", 00:13:00.897 "is_configured": true, 00:13:00.897 "data_offset": 2048, 00:13:00.897 "data_size": 63488 00:13:00.897 } 00:13:00.897 ] 00:13:00.897 } 00:13:00.897 } 00:13:00.897 }' 00:13:00.897 14:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:00.897 14:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:00.897 BaseBdev2 00:13:00.898 BaseBdev3 00:13:00.898 BaseBdev4' 00:13:00.898 14:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:01.156 14:39:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.156 14:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.156 [2024-11-04 14:39:00.253561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:01.156 [2024-11-04 14:39:00.253602] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:01.156 [2024-11-04 14:39:00.253667] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:01.415 14:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.415 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:01.415 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:13:01.415 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:13:01.415 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:13:01.415 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:01.415 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:13:01.415 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.415 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:01.415 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:01.415 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.415 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:01.415 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.415 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.415 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.415 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.415 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.415 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.415 14:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.415 14:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.415 14:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:01.415 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.415 "name": "Existed_Raid", 00:13:01.415 "uuid": "295a6563-9c71-4a0f-9c72-3ee79ab31229", 00:13:01.415 "strip_size_kb": 64, 00:13:01.415 "state": "offline", 00:13:01.415 "raid_level": "raid0", 00:13:01.415 "superblock": true, 00:13:01.415 "num_base_bdevs": 4, 00:13:01.415 "num_base_bdevs_discovered": 3, 00:13:01.415 "num_base_bdevs_operational": 3, 00:13:01.415 "base_bdevs_list": [ 00:13:01.415 { 00:13:01.415 "name": null, 00:13:01.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.415 "is_configured": false, 00:13:01.415 "data_offset": 0, 00:13:01.415 "data_size": 63488 00:13:01.415 }, 00:13:01.415 { 00:13:01.415 "name": "BaseBdev2", 00:13:01.415 "uuid": "13b2af98-7cb1-4757-8c31-1ec31f98efb9", 00:13:01.415 "is_configured": true, 00:13:01.415 "data_offset": 2048, 00:13:01.415 "data_size": 63488 00:13:01.415 }, 00:13:01.415 { 00:13:01.415 "name": "BaseBdev3", 00:13:01.415 "uuid": "6675b1b5-5681-49c4-ac25-225add65d7e7", 00:13:01.415 "is_configured": true, 00:13:01.415 "data_offset": 2048, 00:13:01.415 "data_size": 63488 00:13:01.415 }, 00:13:01.415 { 00:13:01.415 "name": "BaseBdev4", 00:13:01.415 "uuid": "1962f44e-9694-4d43-b3d8-6f460da919b6", 00:13:01.415 "is_configured": true, 00:13:01.415 "data_offset": 2048, 00:13:01.415 "data_size": 63488 00:13:01.415 } 00:13:01.415 ] 00:13:01.415 }' 00:13:01.415 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.415 14:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.982 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:01.982 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:01.982 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.982 
14:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.982 14:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.982 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:01.982 14:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.982 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:01.982 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:01.982 14:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:01.982 14:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.982 14:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.982 [2024-11-04 14:39:00.915774] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:01.982 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.982 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:01.982 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:01.982 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.982 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:01.982 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.982 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.982 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:01.982 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:01.982 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:01.982 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:01.982 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.982 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.982 [2024-11-04 14:39:01.062064] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:02.241 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.241 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:02.241 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:02.241 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.241 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:02.241 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.241 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.241 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.241 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:02.241 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:02.241 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:02.241 14:39:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.241 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.241 [2024-11-04 14:39:01.207962] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:02.241 [2024-11-04 14:39:01.208025] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:02.241 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.241 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:02.241 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:02.241 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.241 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.241 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:02.241 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.241 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.507 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:02.507 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:02.507 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:02.507 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:02.507 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:02.507 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:13:02.507 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.507 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.507 BaseBdev2 00:13:02.507 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.507 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:02.507 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:02.507 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:02.507 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:02.507 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:02.507 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:02.507 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:02.507 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.507 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.508 [ 00:13:02.508 { 00:13:02.508 "name": "BaseBdev2", 00:13:02.508 "aliases": [ 00:13:02.508 
"1cf56432-cd07-4fd8-9533-01e4f9071619" 00:13:02.508 ], 00:13:02.508 "product_name": "Malloc disk", 00:13:02.508 "block_size": 512, 00:13:02.508 "num_blocks": 65536, 00:13:02.508 "uuid": "1cf56432-cd07-4fd8-9533-01e4f9071619", 00:13:02.508 "assigned_rate_limits": { 00:13:02.508 "rw_ios_per_sec": 0, 00:13:02.508 "rw_mbytes_per_sec": 0, 00:13:02.508 "r_mbytes_per_sec": 0, 00:13:02.508 "w_mbytes_per_sec": 0 00:13:02.508 }, 00:13:02.508 "claimed": false, 00:13:02.508 "zoned": false, 00:13:02.508 "supported_io_types": { 00:13:02.508 "read": true, 00:13:02.508 "write": true, 00:13:02.508 "unmap": true, 00:13:02.508 "flush": true, 00:13:02.508 "reset": true, 00:13:02.508 "nvme_admin": false, 00:13:02.508 "nvme_io": false, 00:13:02.508 "nvme_io_md": false, 00:13:02.508 "write_zeroes": true, 00:13:02.508 "zcopy": true, 00:13:02.508 "get_zone_info": false, 00:13:02.508 "zone_management": false, 00:13:02.508 "zone_append": false, 00:13:02.508 "compare": false, 00:13:02.508 "compare_and_write": false, 00:13:02.508 "abort": true, 00:13:02.508 "seek_hole": false, 00:13:02.508 "seek_data": false, 00:13:02.508 "copy": true, 00:13:02.508 "nvme_iov_md": false 00:13:02.508 }, 00:13:02.508 "memory_domains": [ 00:13:02.508 { 00:13:02.508 "dma_device_id": "system", 00:13:02.508 "dma_device_type": 1 00:13:02.508 }, 00:13:02.508 { 00:13:02.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.508 "dma_device_type": 2 00:13:02.508 } 00:13:02.508 ], 00:13:02.508 "driver_specific": {} 00:13:02.508 } 00:13:02.508 ] 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:02.508 14:39:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.508 BaseBdev3 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.508 [ 00:13:02.508 { 
00:13:02.508 "name": "BaseBdev3", 00:13:02.508 "aliases": [ 00:13:02.508 "fb1032df-680b-4ce7-8a0e-ce1326665d41" 00:13:02.508 ], 00:13:02.508 "product_name": "Malloc disk", 00:13:02.508 "block_size": 512, 00:13:02.508 "num_blocks": 65536, 00:13:02.508 "uuid": "fb1032df-680b-4ce7-8a0e-ce1326665d41", 00:13:02.508 "assigned_rate_limits": { 00:13:02.508 "rw_ios_per_sec": 0, 00:13:02.508 "rw_mbytes_per_sec": 0, 00:13:02.508 "r_mbytes_per_sec": 0, 00:13:02.508 "w_mbytes_per_sec": 0 00:13:02.508 }, 00:13:02.508 "claimed": false, 00:13:02.508 "zoned": false, 00:13:02.508 "supported_io_types": { 00:13:02.508 "read": true, 00:13:02.508 "write": true, 00:13:02.508 "unmap": true, 00:13:02.508 "flush": true, 00:13:02.508 "reset": true, 00:13:02.508 "nvme_admin": false, 00:13:02.508 "nvme_io": false, 00:13:02.508 "nvme_io_md": false, 00:13:02.508 "write_zeroes": true, 00:13:02.508 "zcopy": true, 00:13:02.508 "get_zone_info": false, 00:13:02.508 "zone_management": false, 00:13:02.508 "zone_append": false, 00:13:02.508 "compare": false, 00:13:02.508 "compare_and_write": false, 00:13:02.508 "abort": true, 00:13:02.508 "seek_hole": false, 00:13:02.508 "seek_data": false, 00:13:02.508 "copy": true, 00:13:02.508 "nvme_iov_md": false 00:13:02.508 }, 00:13:02.508 "memory_domains": [ 00:13:02.508 { 00:13:02.508 "dma_device_id": "system", 00:13:02.508 "dma_device_type": 1 00:13:02.508 }, 00:13:02.508 { 00:13:02.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.508 "dma_device_type": 2 00:13:02.508 } 00:13:02.508 ], 00:13:02.508 "driver_specific": {} 00:13:02.508 } 00:13:02.508 ] 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.508 BaseBdev4 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.508 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:13:02.508 [ 00:13:02.508 { 00:13:02.508 "name": "BaseBdev4", 00:13:02.508 "aliases": [ 00:13:02.508 "861f387a-4cce-4133-8544-1dbb08dce12f" 00:13:02.508 ], 00:13:02.508 "product_name": "Malloc disk", 00:13:02.508 "block_size": 512, 00:13:02.508 "num_blocks": 65536, 00:13:02.508 "uuid": "861f387a-4cce-4133-8544-1dbb08dce12f", 00:13:02.508 "assigned_rate_limits": { 00:13:02.508 "rw_ios_per_sec": 0, 00:13:02.508 "rw_mbytes_per_sec": 0, 00:13:02.508 "r_mbytes_per_sec": 0, 00:13:02.508 "w_mbytes_per_sec": 0 00:13:02.508 }, 00:13:02.508 "claimed": false, 00:13:02.508 "zoned": false, 00:13:02.508 "supported_io_types": { 00:13:02.508 "read": true, 00:13:02.508 "write": true, 00:13:02.508 "unmap": true, 00:13:02.508 "flush": true, 00:13:02.508 "reset": true, 00:13:02.509 "nvme_admin": false, 00:13:02.509 "nvme_io": false, 00:13:02.509 "nvme_io_md": false, 00:13:02.509 "write_zeroes": true, 00:13:02.509 "zcopy": true, 00:13:02.509 "get_zone_info": false, 00:13:02.509 "zone_management": false, 00:13:02.509 "zone_append": false, 00:13:02.509 "compare": false, 00:13:02.509 "compare_and_write": false, 00:13:02.509 "abort": true, 00:13:02.509 "seek_hole": false, 00:13:02.509 "seek_data": false, 00:13:02.509 "copy": true, 00:13:02.509 "nvme_iov_md": false 00:13:02.509 }, 00:13:02.509 "memory_domains": [ 00:13:02.509 { 00:13:02.509 "dma_device_id": "system", 00:13:02.509 "dma_device_type": 1 00:13:02.509 }, 00:13:02.509 { 00:13:02.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.509 "dma_device_type": 2 00:13:02.509 } 00:13:02.509 ], 00:13:02.509 "driver_specific": {} 00:13:02.509 } 00:13:02.509 ] 00:13:02.509 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.509 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:02.509 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:02.509 14:39:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:02.509 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:02.509 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.509 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.509 [2024-11-04 14:39:01.599853] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:02.509 [2024-11-04 14:39:01.599915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:02.509 [2024-11-04 14:39:01.599972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:02.509 [2024-11-04 14:39:01.602516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:02.509 [2024-11-04 14:39:01.602743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:02.509 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.509 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:02.509 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:02.509 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:02.509 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:02.509 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:02.509 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:02.509 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.509 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.509 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.509 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.509 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.509 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.509 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.509 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.802 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.802 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.802 "name": "Existed_Raid", 00:13:02.802 "uuid": "100a97ff-ce42-4974-9743-8ece53c7ce9f", 00:13:02.802 "strip_size_kb": 64, 00:13:02.802 "state": "configuring", 00:13:02.802 "raid_level": "raid0", 00:13:02.802 "superblock": true, 00:13:02.802 "num_base_bdevs": 4, 00:13:02.802 "num_base_bdevs_discovered": 3, 00:13:02.802 "num_base_bdevs_operational": 4, 00:13:02.802 "base_bdevs_list": [ 00:13:02.802 { 00:13:02.802 "name": "BaseBdev1", 00:13:02.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.802 "is_configured": false, 00:13:02.802 "data_offset": 0, 00:13:02.802 "data_size": 0 00:13:02.802 }, 00:13:02.802 { 00:13:02.802 "name": "BaseBdev2", 00:13:02.802 "uuid": "1cf56432-cd07-4fd8-9533-01e4f9071619", 00:13:02.802 "is_configured": true, 00:13:02.802 "data_offset": 2048, 00:13:02.802 "data_size": 63488 
00:13:02.802 }, 00:13:02.802 { 00:13:02.802 "name": "BaseBdev3", 00:13:02.802 "uuid": "fb1032df-680b-4ce7-8a0e-ce1326665d41", 00:13:02.802 "is_configured": true, 00:13:02.802 "data_offset": 2048, 00:13:02.802 "data_size": 63488 00:13:02.802 }, 00:13:02.802 { 00:13:02.802 "name": "BaseBdev4", 00:13:02.802 "uuid": "861f387a-4cce-4133-8544-1dbb08dce12f", 00:13:02.802 "is_configured": true, 00:13:02.802 "data_offset": 2048, 00:13:02.802 "data_size": 63488 00:13:02.802 } 00:13:02.802 ] 00:13:02.802 }' 00:13:02.802 14:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.802 14:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.061 14:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:03.061 14:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.061 14:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.061 [2024-11-04 14:39:02.127980] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:03.061 14:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.061 14:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:03.061 14:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:03.061 14:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:03.061 14:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:03.061 14:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:03.061 14:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:03.061 14:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.061 14:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.061 14:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.061 14:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.061 14:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.061 14:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.061 14:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.061 14:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.061 14:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.319 14:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.319 "name": "Existed_Raid", 00:13:03.319 "uuid": "100a97ff-ce42-4974-9743-8ece53c7ce9f", 00:13:03.319 "strip_size_kb": 64, 00:13:03.319 "state": "configuring", 00:13:03.319 "raid_level": "raid0", 00:13:03.319 "superblock": true, 00:13:03.319 "num_base_bdevs": 4, 00:13:03.319 "num_base_bdevs_discovered": 2, 00:13:03.319 "num_base_bdevs_operational": 4, 00:13:03.319 "base_bdevs_list": [ 00:13:03.319 { 00:13:03.319 "name": "BaseBdev1", 00:13:03.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.319 "is_configured": false, 00:13:03.319 "data_offset": 0, 00:13:03.319 "data_size": 0 00:13:03.319 }, 00:13:03.319 { 00:13:03.319 "name": null, 00:13:03.319 "uuid": "1cf56432-cd07-4fd8-9533-01e4f9071619", 00:13:03.319 "is_configured": false, 00:13:03.319 "data_offset": 0, 00:13:03.319 "data_size": 63488 
00:13:03.319 }, 00:13:03.319 { 00:13:03.319 "name": "BaseBdev3", 00:13:03.319 "uuid": "fb1032df-680b-4ce7-8a0e-ce1326665d41", 00:13:03.319 "is_configured": true, 00:13:03.319 "data_offset": 2048, 00:13:03.319 "data_size": 63488 00:13:03.319 }, 00:13:03.319 { 00:13:03.319 "name": "BaseBdev4", 00:13:03.319 "uuid": "861f387a-4cce-4133-8544-1dbb08dce12f", 00:13:03.319 "is_configured": true, 00:13:03.319 "data_offset": 2048, 00:13:03.319 "data_size": 63488 00:13:03.319 } 00:13:03.319 ] 00:13:03.319 }' 00:13:03.319 14:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.319 14:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.578 14:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.578 14:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.578 14:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.578 14:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:03.578 14:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.578 14:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:03.578 14:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:03.578 14:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.578 14:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.837 [2024-11-04 14:39:02.730205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:03.837 BaseBdev1 00:13:03.837 14:39:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.837 14:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:03.837 14:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:03.837 14:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:03.837 14:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:03.837 14:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:03.837 14:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:03.837 14:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:03.837 14:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.837 14:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.837 14:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.837 14:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:03.837 14:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.837 14:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.837 [ 00:13:03.837 { 00:13:03.837 "name": "BaseBdev1", 00:13:03.837 "aliases": [ 00:13:03.837 "40218ff1-73d4-4470-9a5f-63b78c3783c7" 00:13:03.837 ], 00:13:03.837 "product_name": "Malloc disk", 00:13:03.837 "block_size": 512, 00:13:03.837 "num_blocks": 65536, 00:13:03.837 "uuid": "40218ff1-73d4-4470-9a5f-63b78c3783c7", 00:13:03.837 "assigned_rate_limits": { 00:13:03.837 "rw_ios_per_sec": 0, 00:13:03.837 "rw_mbytes_per_sec": 0, 
00:13:03.837 "r_mbytes_per_sec": 0, 00:13:03.837 "w_mbytes_per_sec": 0 00:13:03.837 }, 00:13:03.837 "claimed": true, 00:13:03.837 "claim_type": "exclusive_write", 00:13:03.837 "zoned": false, 00:13:03.837 "supported_io_types": { 00:13:03.837 "read": true, 00:13:03.837 "write": true, 00:13:03.837 "unmap": true, 00:13:03.837 "flush": true, 00:13:03.837 "reset": true, 00:13:03.837 "nvme_admin": false, 00:13:03.837 "nvme_io": false, 00:13:03.837 "nvme_io_md": false, 00:13:03.837 "write_zeroes": true, 00:13:03.837 "zcopy": true, 00:13:03.837 "get_zone_info": false, 00:13:03.837 "zone_management": false, 00:13:03.837 "zone_append": false, 00:13:03.837 "compare": false, 00:13:03.837 "compare_and_write": false, 00:13:03.837 "abort": true, 00:13:03.837 "seek_hole": false, 00:13:03.837 "seek_data": false, 00:13:03.837 "copy": true, 00:13:03.837 "nvme_iov_md": false 00:13:03.837 }, 00:13:03.837 "memory_domains": [ 00:13:03.837 { 00:13:03.837 "dma_device_id": "system", 00:13:03.837 "dma_device_type": 1 00:13:03.837 }, 00:13:03.837 { 00:13:03.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.837 "dma_device_type": 2 00:13:03.837 } 00:13:03.837 ], 00:13:03.837 "driver_specific": {} 00:13:03.837 } 00:13:03.837 ] 00:13:03.837 14:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.837 14:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:03.837 14:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:03.837 14:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:03.837 14:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:03.837 14:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:03.837 14:39:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:03.837 14:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:03.837 14:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.837 14:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.837 14:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.837 14:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.837 14:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.837 14:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.837 14:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.837 14:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.837 14:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.837 14:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.837 "name": "Existed_Raid", 00:13:03.837 "uuid": "100a97ff-ce42-4974-9743-8ece53c7ce9f", 00:13:03.837 "strip_size_kb": 64, 00:13:03.837 "state": "configuring", 00:13:03.837 "raid_level": "raid0", 00:13:03.838 "superblock": true, 00:13:03.838 "num_base_bdevs": 4, 00:13:03.838 "num_base_bdevs_discovered": 3, 00:13:03.838 "num_base_bdevs_operational": 4, 00:13:03.838 "base_bdevs_list": [ 00:13:03.838 { 00:13:03.838 "name": "BaseBdev1", 00:13:03.838 "uuid": "40218ff1-73d4-4470-9a5f-63b78c3783c7", 00:13:03.838 "is_configured": true, 00:13:03.838 "data_offset": 2048, 00:13:03.838 "data_size": 63488 00:13:03.838 }, 00:13:03.838 { 
00:13:03.838 "name": null, 00:13:03.838 "uuid": "1cf56432-cd07-4fd8-9533-01e4f9071619", 00:13:03.838 "is_configured": false, 00:13:03.838 "data_offset": 0, 00:13:03.838 "data_size": 63488 00:13:03.838 }, 00:13:03.838 { 00:13:03.838 "name": "BaseBdev3", 00:13:03.838 "uuid": "fb1032df-680b-4ce7-8a0e-ce1326665d41", 00:13:03.838 "is_configured": true, 00:13:03.838 "data_offset": 2048, 00:13:03.838 "data_size": 63488 00:13:03.838 }, 00:13:03.838 { 00:13:03.838 "name": "BaseBdev4", 00:13:03.838 "uuid": "861f387a-4cce-4133-8544-1dbb08dce12f", 00:13:03.838 "is_configured": true, 00:13:03.838 "data_offset": 2048, 00:13:03.838 "data_size": 63488 00:13:03.838 } 00:13:03.838 ] 00:13:03.838 }' 00:13:03.838 14:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.838 14:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.406 14:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.406 14:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.406 14:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.406 14:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:04.406 14:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.406 14:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:04.406 14:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:04.406 14:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.406 14:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.406 [2024-11-04 14:39:03.342467] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:04.406 14:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.406 14:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:04.406 14:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:04.406 14:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:04.406 14:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:04.406 14:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.406 14:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:04.406 14:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.406 14:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.406 14:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.406 14:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.406 14:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.406 14:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.406 14:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.406 14:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.406 14:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.406 14:39:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.406 "name": "Existed_Raid", 00:13:04.406 "uuid": "100a97ff-ce42-4974-9743-8ece53c7ce9f", 00:13:04.406 "strip_size_kb": 64, 00:13:04.406 "state": "configuring", 00:13:04.406 "raid_level": "raid0", 00:13:04.406 "superblock": true, 00:13:04.406 "num_base_bdevs": 4, 00:13:04.406 "num_base_bdevs_discovered": 2, 00:13:04.406 "num_base_bdevs_operational": 4, 00:13:04.406 "base_bdevs_list": [ 00:13:04.406 { 00:13:04.406 "name": "BaseBdev1", 00:13:04.406 "uuid": "40218ff1-73d4-4470-9a5f-63b78c3783c7", 00:13:04.406 "is_configured": true, 00:13:04.406 "data_offset": 2048, 00:13:04.406 "data_size": 63488 00:13:04.406 }, 00:13:04.406 { 00:13:04.406 "name": null, 00:13:04.406 "uuid": "1cf56432-cd07-4fd8-9533-01e4f9071619", 00:13:04.406 "is_configured": false, 00:13:04.406 "data_offset": 0, 00:13:04.406 "data_size": 63488 00:13:04.406 }, 00:13:04.406 { 00:13:04.406 "name": null, 00:13:04.406 "uuid": "fb1032df-680b-4ce7-8a0e-ce1326665d41", 00:13:04.406 "is_configured": false, 00:13:04.406 "data_offset": 0, 00:13:04.406 "data_size": 63488 00:13:04.406 }, 00:13:04.406 { 00:13:04.406 "name": "BaseBdev4", 00:13:04.406 "uuid": "861f387a-4cce-4133-8544-1dbb08dce12f", 00:13:04.406 "is_configured": true, 00:13:04.406 "data_offset": 2048, 00:13:04.406 "data_size": 63488 00:13:04.406 } 00:13:04.406 ] 00:13:04.406 }' 00:13:04.406 14:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.406 14:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.984 14:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.984 14:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.984 14:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.984 14:39:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:04.984 14:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.984 14:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:04.984 14:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:04.984 14:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.984 14:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.984 [2024-11-04 14:39:03.918680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:04.984 14:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.984 14:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:04.984 14:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:04.984 14:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:04.984 14:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:04.984 14:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.984 14:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:04.984 14:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.984 14:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.984 14:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:04.984 14:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.984 14:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.984 14:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.984 14:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.984 14:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.984 14:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.984 14:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.984 "name": "Existed_Raid", 00:13:04.984 "uuid": "100a97ff-ce42-4974-9743-8ece53c7ce9f", 00:13:04.984 "strip_size_kb": 64, 00:13:04.984 "state": "configuring", 00:13:04.984 "raid_level": "raid0", 00:13:04.984 "superblock": true, 00:13:04.984 "num_base_bdevs": 4, 00:13:04.984 "num_base_bdevs_discovered": 3, 00:13:04.984 "num_base_bdevs_operational": 4, 00:13:04.984 "base_bdevs_list": [ 00:13:04.984 { 00:13:04.984 "name": "BaseBdev1", 00:13:04.984 "uuid": "40218ff1-73d4-4470-9a5f-63b78c3783c7", 00:13:04.984 "is_configured": true, 00:13:04.984 "data_offset": 2048, 00:13:04.984 "data_size": 63488 00:13:04.984 }, 00:13:04.984 { 00:13:04.984 "name": null, 00:13:04.984 "uuid": "1cf56432-cd07-4fd8-9533-01e4f9071619", 00:13:04.984 "is_configured": false, 00:13:04.984 "data_offset": 0, 00:13:04.984 "data_size": 63488 00:13:04.984 }, 00:13:04.984 { 00:13:04.984 "name": "BaseBdev3", 00:13:04.984 "uuid": "fb1032df-680b-4ce7-8a0e-ce1326665d41", 00:13:04.984 "is_configured": true, 00:13:04.984 "data_offset": 2048, 00:13:04.984 "data_size": 63488 00:13:04.984 }, 00:13:04.984 { 00:13:04.984 "name": "BaseBdev4", 00:13:04.984 "uuid": 
"861f387a-4cce-4133-8544-1dbb08dce12f", 00:13:04.984 "is_configured": true, 00:13:04.984 "data_offset": 2048, 00:13:04.984 "data_size": 63488 00:13:04.984 } 00:13:04.984 ] 00:13:04.984 }' 00:13:04.984 14:39:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.984 14:39:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.551 14:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.551 14:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.551 14:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.551 14:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:05.551 14:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.551 14:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:05.551 14:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:05.551 14:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.551 14:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.551 [2024-11-04 14:39:04.482905] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:05.551 14:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.551 14:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:05.551 14:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:05.551 14:39:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:05.551 14:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:05.551 14:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:05.551 14:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:05.551 14:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.551 14:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.551 14:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.551 14:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.551 14:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.551 14:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:05.551 14:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.551 14:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.551 14:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.551 14:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.551 "name": "Existed_Raid", 00:13:05.551 "uuid": "100a97ff-ce42-4974-9743-8ece53c7ce9f", 00:13:05.551 "strip_size_kb": 64, 00:13:05.551 "state": "configuring", 00:13:05.551 "raid_level": "raid0", 00:13:05.551 "superblock": true, 00:13:05.551 "num_base_bdevs": 4, 00:13:05.551 "num_base_bdevs_discovered": 2, 00:13:05.551 "num_base_bdevs_operational": 4, 00:13:05.551 "base_bdevs_list": [ 00:13:05.551 { 00:13:05.551 "name": null, 00:13:05.551 
"uuid": "40218ff1-73d4-4470-9a5f-63b78c3783c7", 00:13:05.551 "is_configured": false, 00:13:05.551 "data_offset": 0, 00:13:05.551 "data_size": 63488 00:13:05.551 }, 00:13:05.551 { 00:13:05.551 "name": null, 00:13:05.551 "uuid": "1cf56432-cd07-4fd8-9533-01e4f9071619", 00:13:05.551 "is_configured": false, 00:13:05.551 "data_offset": 0, 00:13:05.551 "data_size": 63488 00:13:05.551 }, 00:13:05.551 { 00:13:05.551 "name": "BaseBdev3", 00:13:05.551 "uuid": "fb1032df-680b-4ce7-8a0e-ce1326665d41", 00:13:05.551 "is_configured": true, 00:13:05.551 "data_offset": 2048, 00:13:05.551 "data_size": 63488 00:13:05.551 }, 00:13:05.551 { 00:13:05.551 "name": "BaseBdev4", 00:13:05.551 "uuid": "861f387a-4cce-4133-8544-1dbb08dce12f", 00:13:05.551 "is_configured": true, 00:13:05.551 "data_offset": 2048, 00:13:05.551 "data_size": 63488 00:13:05.551 } 00:13:05.551 ] 00:13:05.551 }' 00:13:05.551 14:39:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.551 14:39:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.138 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.138 14:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.138 14:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.138 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:06.139 14:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.139 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:06.139 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:06.139 14:39:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.139 14:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.139 [2024-11-04 14:39:05.145727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:06.139 14:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.139 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:06.139 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:06.139 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:06.139 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:06.139 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.139 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:06.139 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.139 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.139 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.139 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.139 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.139 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.139 14:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.139 14:39:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.139 14:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.139 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.139 "name": "Existed_Raid", 00:13:06.139 "uuid": "100a97ff-ce42-4974-9743-8ece53c7ce9f", 00:13:06.139 "strip_size_kb": 64, 00:13:06.139 "state": "configuring", 00:13:06.139 "raid_level": "raid0", 00:13:06.139 "superblock": true, 00:13:06.139 "num_base_bdevs": 4, 00:13:06.139 "num_base_bdevs_discovered": 3, 00:13:06.139 "num_base_bdevs_operational": 4, 00:13:06.139 "base_bdevs_list": [ 00:13:06.139 { 00:13:06.139 "name": null, 00:13:06.139 "uuid": "40218ff1-73d4-4470-9a5f-63b78c3783c7", 00:13:06.139 "is_configured": false, 00:13:06.139 "data_offset": 0, 00:13:06.139 "data_size": 63488 00:13:06.139 }, 00:13:06.139 { 00:13:06.139 "name": "BaseBdev2", 00:13:06.139 "uuid": "1cf56432-cd07-4fd8-9533-01e4f9071619", 00:13:06.139 "is_configured": true, 00:13:06.139 "data_offset": 2048, 00:13:06.139 "data_size": 63488 00:13:06.139 }, 00:13:06.139 { 00:13:06.139 "name": "BaseBdev3", 00:13:06.139 "uuid": "fb1032df-680b-4ce7-8a0e-ce1326665d41", 00:13:06.139 "is_configured": true, 00:13:06.139 "data_offset": 2048, 00:13:06.139 "data_size": 63488 00:13:06.139 }, 00:13:06.139 { 00:13:06.139 "name": "BaseBdev4", 00:13:06.139 "uuid": "861f387a-4cce-4133-8544-1dbb08dce12f", 00:13:06.139 "is_configured": true, 00:13:06.139 "data_offset": 2048, 00:13:06.139 "data_size": 63488 00:13:06.139 } 00:13:06.139 ] 00:13:06.139 }' 00:13:06.139 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.139 14:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.706 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.706 14:39:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.706 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:06.706 14:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.706 14:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.706 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:06.706 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.706 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:06.706 14:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.706 14:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.706 14:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 40218ff1-73d4-4470-9a5f-63b78c3783c7 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.965 [2024-11-04 14:39:05.868511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:06.965 [2024-11-04 14:39:05.868807] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:06.965 [2024-11-04 14:39:05.868826] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:06.965 NewBaseBdev 00:13:06.965 [2024-11-04 14:39:05.869183] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:06.965 [2024-11-04 14:39:05.869368] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:06.965 [2024-11-04 14:39:05.869391] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:06.965 [2024-11-04 14:39:05.869545] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.965 
14:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.965 [ 00:13:06.965 { 00:13:06.965 "name": "NewBaseBdev", 00:13:06.965 "aliases": [ 00:13:06.965 "40218ff1-73d4-4470-9a5f-63b78c3783c7" 00:13:06.965 ], 00:13:06.965 "product_name": "Malloc disk", 00:13:06.965 "block_size": 512, 00:13:06.965 "num_blocks": 65536, 00:13:06.965 "uuid": "40218ff1-73d4-4470-9a5f-63b78c3783c7", 00:13:06.965 "assigned_rate_limits": { 00:13:06.965 "rw_ios_per_sec": 0, 00:13:06.965 "rw_mbytes_per_sec": 0, 00:13:06.965 "r_mbytes_per_sec": 0, 00:13:06.965 "w_mbytes_per_sec": 0 00:13:06.965 }, 00:13:06.965 "claimed": true, 00:13:06.965 "claim_type": "exclusive_write", 00:13:06.965 "zoned": false, 00:13:06.965 "supported_io_types": { 00:13:06.965 "read": true, 00:13:06.965 "write": true, 00:13:06.965 "unmap": true, 00:13:06.965 "flush": true, 00:13:06.965 "reset": true, 00:13:06.965 "nvme_admin": false, 00:13:06.965 "nvme_io": false, 00:13:06.965 "nvme_io_md": false, 00:13:06.965 "write_zeroes": true, 00:13:06.965 "zcopy": true, 00:13:06.965 "get_zone_info": false, 00:13:06.965 "zone_management": false, 00:13:06.965 "zone_append": false, 00:13:06.965 "compare": false, 00:13:06.965 "compare_and_write": false, 00:13:06.965 "abort": true, 00:13:06.965 "seek_hole": false, 00:13:06.965 "seek_data": false, 00:13:06.965 "copy": true, 00:13:06.965 "nvme_iov_md": false 00:13:06.965 }, 00:13:06.965 "memory_domains": [ 00:13:06.965 { 00:13:06.965 "dma_device_id": "system", 00:13:06.965 "dma_device_type": 1 00:13:06.965 }, 00:13:06.965 { 00:13:06.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.965 "dma_device_type": 2 00:13:06.965 } 00:13:06.965 ], 00:13:06.965 "driver_specific": {} 00:13:06.965 } 00:13:06.965 ] 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:06.965 14:39:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.965 "name": "Existed_Raid", 00:13:06.965 "uuid": "100a97ff-ce42-4974-9743-8ece53c7ce9f", 00:13:06.965 "strip_size_kb": 64, 00:13:06.965 
"state": "online", 00:13:06.965 "raid_level": "raid0", 00:13:06.965 "superblock": true, 00:13:06.965 "num_base_bdevs": 4, 00:13:06.965 "num_base_bdevs_discovered": 4, 00:13:06.965 "num_base_bdevs_operational": 4, 00:13:06.965 "base_bdevs_list": [ 00:13:06.965 { 00:13:06.965 "name": "NewBaseBdev", 00:13:06.965 "uuid": "40218ff1-73d4-4470-9a5f-63b78c3783c7", 00:13:06.965 "is_configured": true, 00:13:06.965 "data_offset": 2048, 00:13:06.965 "data_size": 63488 00:13:06.965 }, 00:13:06.965 { 00:13:06.965 "name": "BaseBdev2", 00:13:06.965 "uuid": "1cf56432-cd07-4fd8-9533-01e4f9071619", 00:13:06.965 "is_configured": true, 00:13:06.965 "data_offset": 2048, 00:13:06.965 "data_size": 63488 00:13:06.965 }, 00:13:06.965 { 00:13:06.965 "name": "BaseBdev3", 00:13:06.965 "uuid": "fb1032df-680b-4ce7-8a0e-ce1326665d41", 00:13:06.965 "is_configured": true, 00:13:06.965 "data_offset": 2048, 00:13:06.965 "data_size": 63488 00:13:06.965 }, 00:13:06.965 { 00:13:06.965 "name": "BaseBdev4", 00:13:06.965 "uuid": "861f387a-4cce-4133-8544-1dbb08dce12f", 00:13:06.965 "is_configured": true, 00:13:06.965 "data_offset": 2048, 00:13:06.965 "data_size": 63488 00:13:06.965 } 00:13:06.965 ] 00:13:06.965 }' 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.965 14:39:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.534 14:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:07.534 14:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:07.534 14:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:07.534 14:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:07.534 14:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:07.534 
14:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:07.534 14:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:07.534 14:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.534 14:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.534 14:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:07.534 [2024-11-04 14:39:06.409234] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:07.534 14:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.534 14:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:07.534 "name": "Existed_Raid", 00:13:07.534 "aliases": [ 00:13:07.534 "100a97ff-ce42-4974-9743-8ece53c7ce9f" 00:13:07.534 ], 00:13:07.534 "product_name": "Raid Volume", 00:13:07.534 "block_size": 512, 00:13:07.534 "num_blocks": 253952, 00:13:07.534 "uuid": "100a97ff-ce42-4974-9743-8ece53c7ce9f", 00:13:07.534 "assigned_rate_limits": { 00:13:07.534 "rw_ios_per_sec": 0, 00:13:07.534 "rw_mbytes_per_sec": 0, 00:13:07.534 "r_mbytes_per_sec": 0, 00:13:07.534 "w_mbytes_per_sec": 0 00:13:07.534 }, 00:13:07.534 "claimed": false, 00:13:07.534 "zoned": false, 00:13:07.534 "supported_io_types": { 00:13:07.534 "read": true, 00:13:07.534 "write": true, 00:13:07.534 "unmap": true, 00:13:07.534 "flush": true, 00:13:07.534 "reset": true, 00:13:07.534 "nvme_admin": false, 00:13:07.534 "nvme_io": false, 00:13:07.534 "nvme_io_md": false, 00:13:07.534 "write_zeroes": true, 00:13:07.534 "zcopy": false, 00:13:07.534 "get_zone_info": false, 00:13:07.534 "zone_management": false, 00:13:07.535 "zone_append": false, 00:13:07.535 "compare": false, 00:13:07.535 "compare_and_write": false, 00:13:07.535 "abort": 
false, 00:13:07.535 "seek_hole": false, 00:13:07.535 "seek_data": false, 00:13:07.535 "copy": false, 00:13:07.535 "nvme_iov_md": false 00:13:07.535 }, 00:13:07.535 "memory_domains": [ 00:13:07.535 { 00:13:07.535 "dma_device_id": "system", 00:13:07.535 "dma_device_type": 1 00:13:07.535 }, 00:13:07.535 { 00:13:07.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.535 "dma_device_type": 2 00:13:07.535 }, 00:13:07.535 { 00:13:07.535 "dma_device_id": "system", 00:13:07.535 "dma_device_type": 1 00:13:07.535 }, 00:13:07.535 { 00:13:07.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.535 "dma_device_type": 2 00:13:07.535 }, 00:13:07.535 { 00:13:07.535 "dma_device_id": "system", 00:13:07.535 "dma_device_type": 1 00:13:07.535 }, 00:13:07.535 { 00:13:07.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.535 "dma_device_type": 2 00:13:07.535 }, 00:13:07.535 { 00:13:07.535 "dma_device_id": "system", 00:13:07.535 "dma_device_type": 1 00:13:07.535 }, 00:13:07.535 { 00:13:07.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.535 "dma_device_type": 2 00:13:07.535 } 00:13:07.535 ], 00:13:07.535 "driver_specific": { 00:13:07.535 "raid": { 00:13:07.535 "uuid": "100a97ff-ce42-4974-9743-8ece53c7ce9f", 00:13:07.535 "strip_size_kb": 64, 00:13:07.535 "state": "online", 00:13:07.535 "raid_level": "raid0", 00:13:07.535 "superblock": true, 00:13:07.535 "num_base_bdevs": 4, 00:13:07.535 "num_base_bdevs_discovered": 4, 00:13:07.535 "num_base_bdevs_operational": 4, 00:13:07.535 "base_bdevs_list": [ 00:13:07.535 { 00:13:07.535 "name": "NewBaseBdev", 00:13:07.535 "uuid": "40218ff1-73d4-4470-9a5f-63b78c3783c7", 00:13:07.535 "is_configured": true, 00:13:07.535 "data_offset": 2048, 00:13:07.535 "data_size": 63488 00:13:07.535 }, 00:13:07.535 { 00:13:07.535 "name": "BaseBdev2", 00:13:07.535 "uuid": "1cf56432-cd07-4fd8-9533-01e4f9071619", 00:13:07.535 "is_configured": true, 00:13:07.535 "data_offset": 2048, 00:13:07.535 "data_size": 63488 00:13:07.535 }, 00:13:07.535 { 00:13:07.535 
"name": "BaseBdev3", 00:13:07.535 "uuid": "fb1032df-680b-4ce7-8a0e-ce1326665d41", 00:13:07.535 "is_configured": true, 00:13:07.535 "data_offset": 2048, 00:13:07.535 "data_size": 63488 00:13:07.535 }, 00:13:07.535 { 00:13:07.535 "name": "BaseBdev4", 00:13:07.535 "uuid": "861f387a-4cce-4133-8544-1dbb08dce12f", 00:13:07.535 "is_configured": true, 00:13:07.535 "data_offset": 2048, 00:13:07.535 "data_size": 63488 00:13:07.535 } 00:13:07.535 ] 00:13:07.535 } 00:13:07.535 } 00:13:07.535 }' 00:13:07.535 14:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:07.535 14:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:07.535 BaseBdev2 00:13:07.535 BaseBdev3 00:13:07.535 BaseBdev4' 00:13:07.535 14:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.535 14:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:07.535 14:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.535 14:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:07.535 14:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.535 14:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.535 14:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.535 14:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.535 14:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:07.535 14:39:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:07.535 14:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.535 14:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:07.535 14:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.535 14:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.535 14:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.793 14:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.793 14:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:07.793 14:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:07.793 14:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.793 14:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.793 14:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:07.793 14:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.793 14:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.793 14:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.793 14:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:07.793 14:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:13:07.793 14:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.793 14:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:07.793 14:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.793 14:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.793 14:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.793 14:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.793 14:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:07.793 14:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:07.793 14:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:07.793 14:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.793 14:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.793 [2024-11-04 14:39:06.804873] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:07.793 [2024-11-04 14:39:06.804912] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:07.793 [2024-11-04 14:39:06.805043] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:07.793 [2024-11-04 14:39:06.805141] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:07.793 [2024-11-04 14:39:06.805157] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:13:07.793 14:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.793 14:39:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70134 00:13:07.793 14:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 70134 ']' 00:13:07.793 14:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 70134 00:13:07.793 14:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:13:07.793 14:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:07.793 14:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70134 00:13:07.793 killing process with pid 70134 00:13:07.793 14:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:07.793 14:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:07.793 14:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70134' 00:13:07.793 14:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 70134 00:13:07.793 [2024-11-04 14:39:06.848003] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:07.793 14:39:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 70134 00:13:08.359 [2024-11-04 14:39:07.206819] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:09.296 14:39:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:09.296 00:13:09.296 real 0m12.914s 00:13:09.296 user 0m21.457s 00:13:09.296 sys 0m1.795s 00:13:09.296 14:39:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:09.296 14:39:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.296 ************************************ 00:13:09.296 END TEST raid_state_function_test_sb 00:13:09.296 ************************************ 00:13:09.296 14:39:08 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:13:09.296 14:39:08 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:09.296 14:39:08 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:09.296 14:39:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:09.296 ************************************ 00:13:09.296 START TEST raid_superblock_test 00:13:09.296 ************************************ 00:13:09.296 14:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 4 00:13:09.296 14:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:13:09.296 14:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:09.296 14:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:09.296 14:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:09.296 14:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:09.296 14:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:09.296 14:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:09.296 14:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:09.296 14:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:09.296 14:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:09.296 14:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:13:09.296 14:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:09.296 14:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:09.296 14:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:13:09.296 14:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:09.296 14:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:09.296 14:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70819 00:13:09.296 14:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:09.296 14:39:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70819 00:13:09.296 14:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 70819 ']' 00:13:09.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.296 14:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.296 14:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:09.296 14:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.296 14:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:09.296 14:39:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.296 [2024-11-04 14:39:08.406089] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:13:09.296 [2024-11-04 14:39:08.406368] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70819 ] 00:13:09.555 [2024-11-04 14:39:08.614310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.814 [2024-11-04 14:39:08.749140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.072 [2024-11-04 14:39:08.956670] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:10.072 [2024-11-04 14:39:08.956865] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:10.330 14:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:10.330 14:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:13:10.330 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:10.330 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:10.331 
14:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.331 malloc1 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.331 [2024-11-04 14:39:09.378456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:10.331 [2024-11-04 14:39:09.378717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.331 [2024-11-04 14:39:09.378759] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:10.331 [2024-11-04 14:39:09.378776] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.331 [2024-11-04 14:39:09.381905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.331 [2024-11-04 14:39:09.381978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:10.331 pt1 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.331 malloc2 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.331 [2024-11-04 14:39:09.437012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:10.331 [2024-11-04 14:39:09.437249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.331 [2024-11-04 14:39:09.437415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:10.331 [2024-11-04 14:39:09.437549] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.331 [2024-11-04 14:39:09.440464] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.331 [2024-11-04 14:39:09.440673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:10.331 
pt2 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.331 14:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.590 malloc3 00:13:10.590 14:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.590 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:10.590 14:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.590 14:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.590 [2024-11-04 14:39:09.505116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:10.590 [2024-11-04 14:39:09.505180] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.590 [2024-11-04 14:39:09.505212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:10.590 [2024-11-04 14:39:09.505227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.590 [2024-11-04 14:39:09.508292] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.590 [2024-11-04 14:39:09.508339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:10.590 pt3 00:13:10.590 14:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.590 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:10.590 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:10.590 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:10.590 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:10.590 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:10.590 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:10.590 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:10.590 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:10.590 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:10.590 14:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.590 14:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.590 malloc4 00:13:10.590 14:39:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.590 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:10.590 14:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.590 14:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.591 [2024-11-04 14:39:09.560188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:10.591 [2024-11-04 14:39:09.560282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.591 [2024-11-04 14:39:09.560313] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:10.591 [2024-11-04 14:39:09.560329] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.591 [2024-11-04 14:39:09.563343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.591 [2024-11-04 14:39:09.563385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:10.591 pt4 00:13:10.591 14:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.591 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:10.591 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:10.591 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:10.591 14:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.591 14:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.591 [2024-11-04 14:39:09.572237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:10.591 [2024-11-04 
14:39:09.574887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:10.591 [2024-11-04 14:39:09.575231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:10.591 [2024-11-04 14:39:09.575345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:10.591 [2024-11-04 14:39:09.575637] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:10.591 [2024-11-04 14:39:09.575657] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:10.591 [2024-11-04 14:39:09.576058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:10.591 [2024-11-04 14:39:09.576287] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:10.591 [2024-11-04 14:39:09.576309] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:10.591 [2024-11-04 14:39:09.576565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.591 14:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.591 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:10.591 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.591 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.591 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:10.591 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:10.591 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:10.591 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:13:10.591 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.591 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.591 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.591 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.591 14:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.591 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.591 14:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.591 14:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.591 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.591 "name": "raid_bdev1", 00:13:10.591 "uuid": "8f994166-7da7-488e-ab21-39ec55c78217", 00:13:10.591 "strip_size_kb": 64, 00:13:10.591 "state": "online", 00:13:10.591 "raid_level": "raid0", 00:13:10.591 "superblock": true, 00:13:10.591 "num_base_bdevs": 4, 00:13:10.591 "num_base_bdevs_discovered": 4, 00:13:10.591 "num_base_bdevs_operational": 4, 00:13:10.591 "base_bdevs_list": [ 00:13:10.591 { 00:13:10.591 "name": "pt1", 00:13:10.591 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:10.591 "is_configured": true, 00:13:10.591 "data_offset": 2048, 00:13:10.591 "data_size": 63488 00:13:10.591 }, 00:13:10.591 { 00:13:10.591 "name": "pt2", 00:13:10.591 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:10.591 "is_configured": true, 00:13:10.591 "data_offset": 2048, 00:13:10.591 "data_size": 63488 00:13:10.591 }, 00:13:10.591 { 00:13:10.591 "name": "pt3", 00:13:10.591 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:10.591 "is_configured": true, 00:13:10.591 "data_offset": 2048, 00:13:10.591 
"data_size": 63488 00:13:10.591 }, 00:13:10.591 { 00:13:10.591 "name": "pt4", 00:13:10.591 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:10.591 "is_configured": true, 00:13:10.591 "data_offset": 2048, 00:13:10.591 "data_size": 63488 00:13:10.591 } 00:13:10.591 ] 00:13:10.591 }' 00:13:10.591 14:39:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.591 14:39:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.157 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:11.157 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:11.157 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:11.157 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:11.157 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:11.157 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:11.157 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:11.157 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:11.157 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.157 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.157 [2024-11-04 14:39:10.085166] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:11.157 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.157 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:11.157 "name": "raid_bdev1", 00:13:11.157 "aliases": [ 00:13:11.157 "8f994166-7da7-488e-ab21-39ec55c78217" 
00:13:11.157 ], 00:13:11.157 "product_name": "Raid Volume", 00:13:11.157 "block_size": 512, 00:13:11.158 "num_blocks": 253952, 00:13:11.158 "uuid": "8f994166-7da7-488e-ab21-39ec55c78217", 00:13:11.158 "assigned_rate_limits": { 00:13:11.158 "rw_ios_per_sec": 0, 00:13:11.158 "rw_mbytes_per_sec": 0, 00:13:11.158 "r_mbytes_per_sec": 0, 00:13:11.158 "w_mbytes_per_sec": 0 00:13:11.158 }, 00:13:11.158 "claimed": false, 00:13:11.158 "zoned": false, 00:13:11.158 "supported_io_types": { 00:13:11.158 "read": true, 00:13:11.158 "write": true, 00:13:11.158 "unmap": true, 00:13:11.158 "flush": true, 00:13:11.158 "reset": true, 00:13:11.158 "nvme_admin": false, 00:13:11.158 "nvme_io": false, 00:13:11.158 "nvme_io_md": false, 00:13:11.158 "write_zeroes": true, 00:13:11.158 "zcopy": false, 00:13:11.158 "get_zone_info": false, 00:13:11.158 "zone_management": false, 00:13:11.158 "zone_append": false, 00:13:11.158 "compare": false, 00:13:11.158 "compare_and_write": false, 00:13:11.158 "abort": false, 00:13:11.158 "seek_hole": false, 00:13:11.158 "seek_data": false, 00:13:11.158 "copy": false, 00:13:11.158 "nvme_iov_md": false 00:13:11.158 }, 00:13:11.158 "memory_domains": [ 00:13:11.158 { 00:13:11.158 "dma_device_id": "system", 00:13:11.158 "dma_device_type": 1 00:13:11.158 }, 00:13:11.158 { 00:13:11.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.158 "dma_device_type": 2 00:13:11.158 }, 00:13:11.158 { 00:13:11.158 "dma_device_id": "system", 00:13:11.158 "dma_device_type": 1 00:13:11.158 }, 00:13:11.158 { 00:13:11.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.158 "dma_device_type": 2 00:13:11.158 }, 00:13:11.158 { 00:13:11.158 "dma_device_id": "system", 00:13:11.158 "dma_device_type": 1 00:13:11.158 }, 00:13:11.158 { 00:13:11.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.158 "dma_device_type": 2 00:13:11.158 }, 00:13:11.158 { 00:13:11.158 "dma_device_id": "system", 00:13:11.158 "dma_device_type": 1 00:13:11.158 }, 00:13:11.158 { 00:13:11.158 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:11.158 "dma_device_type": 2 00:13:11.158 } 00:13:11.158 ], 00:13:11.158 "driver_specific": { 00:13:11.158 "raid": { 00:13:11.158 "uuid": "8f994166-7da7-488e-ab21-39ec55c78217", 00:13:11.158 "strip_size_kb": 64, 00:13:11.158 "state": "online", 00:13:11.158 "raid_level": "raid0", 00:13:11.158 "superblock": true, 00:13:11.158 "num_base_bdevs": 4, 00:13:11.158 "num_base_bdevs_discovered": 4, 00:13:11.158 "num_base_bdevs_operational": 4, 00:13:11.158 "base_bdevs_list": [ 00:13:11.158 { 00:13:11.158 "name": "pt1", 00:13:11.158 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:11.158 "is_configured": true, 00:13:11.158 "data_offset": 2048, 00:13:11.158 "data_size": 63488 00:13:11.158 }, 00:13:11.158 { 00:13:11.158 "name": "pt2", 00:13:11.158 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:11.158 "is_configured": true, 00:13:11.158 "data_offset": 2048, 00:13:11.158 "data_size": 63488 00:13:11.158 }, 00:13:11.158 { 00:13:11.158 "name": "pt3", 00:13:11.158 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:11.158 "is_configured": true, 00:13:11.158 "data_offset": 2048, 00:13:11.158 "data_size": 63488 00:13:11.158 }, 00:13:11.158 { 00:13:11.158 "name": "pt4", 00:13:11.158 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:11.158 "is_configured": true, 00:13:11.158 "data_offset": 2048, 00:13:11.158 "data_size": 63488 00:13:11.158 } 00:13:11.158 ] 00:13:11.158 } 00:13:11.158 } 00:13:11.158 }' 00:13:11.158 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:11.158 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:11.158 pt2 00:13:11.158 pt3 00:13:11.158 pt4' 00:13:11.158 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.158 14:39:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:11.158 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:11.158 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:11.158 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.158 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.158 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.158 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:11.417 14:39:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:11.417 [2024-11-04 14:39:10.469283] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8f994166-7da7-488e-ab21-39ec55c78217 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8f994166-7da7-488e-ab21-39ec55c78217 ']' 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.417 [2024-11-04 14:39:10.520873] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:11.417 [2024-11-04 14:39:10.520914] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:11.417 [2024-11-04 14:39:10.521046] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:11.417 [2024-11-04 14:39:10.521146] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:11.417 [2024-11-04 14:39:10.521178] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.417 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.418 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.418 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:13:11.418 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:11.418 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.677 [2024-11-04 14:39:10.684941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:11.677 [2024-11-04 14:39:10.687515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:11.677 [2024-11-04 14:39:10.687593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:11.677 [2024-11-04 14:39:10.687651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:11.677 [2024-11-04 14:39:10.687739] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:11.677 [2024-11-04 14:39:10.687815] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:11.677 [2024-11-04 14:39:10.687850] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:11.677 [2024-11-04 14:39:10.687883] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:11.677 [2024-11-04 14:39:10.687907] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:11.677 [2024-11-04 14:39:10.687946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:13:11.677 request: 00:13:11.677 { 00:13:11.677 "name": "raid_bdev1", 00:13:11.677 "raid_level": "raid0", 00:13:11.677 "base_bdevs": [ 00:13:11.677 "malloc1", 00:13:11.677 "malloc2", 00:13:11.677 "malloc3", 00:13:11.677 "malloc4" 00:13:11.677 ], 00:13:11.677 "strip_size_kb": 64, 00:13:11.677 "superblock": false, 00:13:11.677 "method": "bdev_raid_create", 00:13:11.677 "req_id": 1 00:13:11.677 } 00:13:11.677 Got JSON-RPC error response 00:13:11.677 response: 00:13:11.677 { 00:13:11.677 "code": -17, 00:13:11.677 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:11.677 } 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.677 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.677 [2024-11-04 14:39:10.752955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:11.677 [2024-11-04 14:39:10.753041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.677 [2024-11-04 14:39:10.753070] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:11.677 [2024-11-04 14:39:10.753087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.677 [2024-11-04 14:39:10.756089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.677 [2024-11-04 14:39:10.756143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:11.678 [2024-11-04 14:39:10.756249] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:11.678 [2024-11-04 14:39:10.756332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:11.678 pt1 00:13:11.678 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.678 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:13:11.678 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.678 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:11.678 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:11.678 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:11.678 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:11.678 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.678 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.678 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.678 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.678 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.678 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.678 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.678 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.678 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.937 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.937 "name": "raid_bdev1", 00:13:11.937 "uuid": "8f994166-7da7-488e-ab21-39ec55c78217", 00:13:11.937 "strip_size_kb": 64, 00:13:11.937 "state": "configuring", 00:13:11.937 "raid_level": "raid0", 00:13:11.937 "superblock": true, 00:13:11.937 "num_base_bdevs": 4, 00:13:11.937 "num_base_bdevs_discovered": 1, 00:13:11.937 "num_base_bdevs_operational": 4, 00:13:11.937 "base_bdevs_list": [ 00:13:11.937 { 00:13:11.937 "name": "pt1", 00:13:11.938 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:11.938 "is_configured": true, 00:13:11.938 "data_offset": 2048, 00:13:11.938 "data_size": 63488 00:13:11.938 }, 00:13:11.938 { 00:13:11.938 "name": null, 00:13:11.938 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:11.938 "is_configured": false, 00:13:11.938 "data_offset": 2048, 00:13:11.938 "data_size": 63488 00:13:11.938 }, 00:13:11.938 { 00:13:11.938 "name": null, 00:13:11.938 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:13:11.938 "is_configured": false, 00:13:11.938 "data_offset": 2048, 00:13:11.938 "data_size": 63488 00:13:11.938 }, 00:13:11.938 { 00:13:11.938 "name": null, 00:13:11.938 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:11.938 "is_configured": false, 00:13:11.938 "data_offset": 2048, 00:13:11.938 "data_size": 63488 00:13:11.938 } 00:13:11.938 ] 00:13:11.938 }' 00:13:11.938 14:39:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.938 14:39:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.197 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:13:12.197 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:12.197 14:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.197 14:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.197 [2024-11-04 14:39:11.313173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:12.197 [2024-11-04 14:39:11.313307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.197 [2024-11-04 14:39:11.313349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:12.197 [2024-11-04 14:39:11.313366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.197 [2024-11-04 14:39:11.313972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.197 [2024-11-04 14:39:11.314016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:12.197 [2024-11-04 14:39:11.314117] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:12.197 [2024-11-04 14:39:11.314155] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:12.197 pt2 00:13:12.197 14:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.197 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:12.197 14:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.197 14:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.475 [2024-11-04 14:39:11.321153] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:12.475 14:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.475 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:13:12.475 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.475 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:12.475 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:12.475 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:12.475 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:12.475 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.475 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.475 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.475 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.475 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.475 14:39:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.475 14:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.475 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.475 14:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.475 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.475 "name": "raid_bdev1", 00:13:12.475 "uuid": "8f994166-7da7-488e-ab21-39ec55c78217", 00:13:12.475 "strip_size_kb": 64, 00:13:12.475 "state": "configuring", 00:13:12.475 "raid_level": "raid0", 00:13:12.475 "superblock": true, 00:13:12.475 "num_base_bdevs": 4, 00:13:12.475 "num_base_bdevs_discovered": 1, 00:13:12.475 "num_base_bdevs_operational": 4, 00:13:12.475 "base_bdevs_list": [ 00:13:12.475 { 00:13:12.475 "name": "pt1", 00:13:12.475 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:12.475 "is_configured": true, 00:13:12.475 "data_offset": 2048, 00:13:12.475 "data_size": 63488 00:13:12.475 }, 00:13:12.475 { 00:13:12.475 "name": null, 00:13:12.475 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:12.475 "is_configured": false, 00:13:12.475 "data_offset": 0, 00:13:12.475 "data_size": 63488 00:13:12.475 }, 00:13:12.475 { 00:13:12.475 "name": null, 00:13:12.475 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:12.475 "is_configured": false, 00:13:12.475 "data_offset": 2048, 00:13:12.475 "data_size": 63488 00:13:12.475 }, 00:13:12.475 { 00:13:12.475 "name": null, 00:13:12.475 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:12.475 "is_configured": false, 00:13:12.475 "data_offset": 2048, 00:13:12.475 "data_size": 63488 00:13:12.475 } 00:13:12.475 ] 00:13:12.475 }' 00:13:12.475 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.475 14:39:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.044 [2024-11-04 14:39:11.877370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:13.044 [2024-11-04 14:39:11.877475] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.044 [2024-11-04 14:39:11.877506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:13.044 [2024-11-04 14:39:11.877522] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.044 [2024-11-04 14:39:11.878111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.044 [2024-11-04 14:39:11.878148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:13.044 [2024-11-04 14:39:11.878265] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:13.044 [2024-11-04 14:39:11.878303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:13.044 pt2 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.044 [2024-11-04 14:39:11.885331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:13.044 [2024-11-04 14:39:11.885389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.044 [2024-11-04 14:39:11.885423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:13.044 [2024-11-04 14:39:11.885439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.044 [2024-11-04 14:39:11.885880] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.044 [2024-11-04 14:39:11.885937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:13.044 [2024-11-04 14:39:11.886023] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:13.044 [2024-11-04 14:39:11.886052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:13.044 pt3 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.044 [2024-11-04 14:39:11.893318] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:13.044 [2024-11-04 14:39:11.893378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.044 [2024-11-04 14:39:11.893407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:13.044 [2024-11-04 14:39:11.893421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.044 [2024-11-04 14:39:11.893872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.044 [2024-11-04 14:39:11.893908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:13.044 [2024-11-04 14:39:11.894018] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:13.044 [2024-11-04 14:39:11.894048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:13.044 [2024-11-04 14:39:11.894223] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:13.044 [2024-11-04 14:39:11.894250] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:13.044 [2024-11-04 14:39:11.894561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:13.044 [2024-11-04 14:39:11.894774] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:13.044 [2024-11-04 14:39:11.894803] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:13.044 [2024-11-04 14:39:11.894989] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.044 pt4 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.044 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.044 "name": "raid_bdev1", 00:13:13.044 "uuid": "8f994166-7da7-488e-ab21-39ec55c78217", 00:13:13.044 "strip_size_kb": 64, 00:13:13.044 "state": "online", 00:13:13.044 "raid_level": "raid0", 00:13:13.044 
"superblock": true, 00:13:13.044 "num_base_bdevs": 4, 00:13:13.044 "num_base_bdevs_discovered": 4, 00:13:13.044 "num_base_bdevs_operational": 4, 00:13:13.044 "base_bdevs_list": [ 00:13:13.044 { 00:13:13.044 "name": "pt1", 00:13:13.044 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:13.044 "is_configured": true, 00:13:13.044 "data_offset": 2048, 00:13:13.044 "data_size": 63488 00:13:13.044 }, 00:13:13.044 { 00:13:13.044 "name": "pt2", 00:13:13.044 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:13.044 "is_configured": true, 00:13:13.044 "data_offset": 2048, 00:13:13.044 "data_size": 63488 00:13:13.044 }, 00:13:13.044 { 00:13:13.044 "name": "pt3", 00:13:13.044 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:13.044 "is_configured": true, 00:13:13.044 "data_offset": 2048, 00:13:13.044 "data_size": 63488 00:13:13.044 }, 00:13:13.044 { 00:13:13.044 "name": "pt4", 00:13:13.044 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:13.044 "is_configured": true, 00:13:13.044 "data_offset": 2048, 00:13:13.045 "data_size": 63488 00:13:13.045 } 00:13:13.045 ] 00:13:13.045 }' 00:13:13.045 14:39:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.045 14:39:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.304 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:13.304 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:13.304 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:13.304 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:13.304 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:13.304 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:13.304 14:39:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:13.304 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:13.304 14:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.304 14:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.563 [2024-11-04 14:39:12.425914] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:13.563 14:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.563 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:13.563 "name": "raid_bdev1", 00:13:13.563 "aliases": [ 00:13:13.563 "8f994166-7da7-488e-ab21-39ec55c78217" 00:13:13.563 ], 00:13:13.563 "product_name": "Raid Volume", 00:13:13.563 "block_size": 512, 00:13:13.563 "num_blocks": 253952, 00:13:13.563 "uuid": "8f994166-7da7-488e-ab21-39ec55c78217", 00:13:13.563 "assigned_rate_limits": { 00:13:13.563 "rw_ios_per_sec": 0, 00:13:13.563 "rw_mbytes_per_sec": 0, 00:13:13.563 "r_mbytes_per_sec": 0, 00:13:13.563 "w_mbytes_per_sec": 0 00:13:13.563 }, 00:13:13.563 "claimed": false, 00:13:13.563 "zoned": false, 00:13:13.563 "supported_io_types": { 00:13:13.563 "read": true, 00:13:13.563 "write": true, 00:13:13.563 "unmap": true, 00:13:13.563 "flush": true, 00:13:13.563 "reset": true, 00:13:13.563 "nvme_admin": false, 00:13:13.563 "nvme_io": false, 00:13:13.563 "nvme_io_md": false, 00:13:13.563 "write_zeroes": true, 00:13:13.563 "zcopy": false, 00:13:13.563 "get_zone_info": false, 00:13:13.563 "zone_management": false, 00:13:13.563 "zone_append": false, 00:13:13.563 "compare": false, 00:13:13.563 "compare_and_write": false, 00:13:13.563 "abort": false, 00:13:13.563 "seek_hole": false, 00:13:13.563 "seek_data": false, 00:13:13.563 "copy": false, 00:13:13.563 "nvme_iov_md": false 00:13:13.563 }, 00:13:13.563 
"memory_domains": [ 00:13:13.563 { 00:13:13.563 "dma_device_id": "system", 00:13:13.563 "dma_device_type": 1 00:13:13.563 }, 00:13:13.563 { 00:13:13.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.563 "dma_device_type": 2 00:13:13.563 }, 00:13:13.564 { 00:13:13.564 "dma_device_id": "system", 00:13:13.564 "dma_device_type": 1 00:13:13.564 }, 00:13:13.564 { 00:13:13.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.564 "dma_device_type": 2 00:13:13.564 }, 00:13:13.564 { 00:13:13.564 "dma_device_id": "system", 00:13:13.564 "dma_device_type": 1 00:13:13.564 }, 00:13:13.564 { 00:13:13.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.564 "dma_device_type": 2 00:13:13.564 }, 00:13:13.564 { 00:13:13.564 "dma_device_id": "system", 00:13:13.564 "dma_device_type": 1 00:13:13.564 }, 00:13:13.564 { 00:13:13.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.564 "dma_device_type": 2 00:13:13.564 } 00:13:13.564 ], 00:13:13.564 "driver_specific": { 00:13:13.564 "raid": { 00:13:13.564 "uuid": "8f994166-7da7-488e-ab21-39ec55c78217", 00:13:13.564 "strip_size_kb": 64, 00:13:13.564 "state": "online", 00:13:13.564 "raid_level": "raid0", 00:13:13.564 "superblock": true, 00:13:13.564 "num_base_bdevs": 4, 00:13:13.564 "num_base_bdevs_discovered": 4, 00:13:13.564 "num_base_bdevs_operational": 4, 00:13:13.564 "base_bdevs_list": [ 00:13:13.564 { 00:13:13.564 "name": "pt1", 00:13:13.564 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:13.564 "is_configured": true, 00:13:13.564 "data_offset": 2048, 00:13:13.564 "data_size": 63488 00:13:13.564 }, 00:13:13.564 { 00:13:13.564 "name": "pt2", 00:13:13.564 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:13.564 "is_configured": true, 00:13:13.564 "data_offset": 2048, 00:13:13.564 "data_size": 63488 00:13:13.564 }, 00:13:13.564 { 00:13:13.564 "name": "pt3", 00:13:13.564 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:13.564 "is_configured": true, 00:13:13.564 "data_offset": 2048, 00:13:13.564 "data_size": 63488 
00:13:13.564 }, 00:13:13.564 { 00:13:13.564 "name": "pt4", 00:13:13.564 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:13.564 "is_configured": true, 00:13:13.564 "data_offset": 2048, 00:13:13.564 "data_size": 63488 00:13:13.564 } 00:13:13.564 ] 00:13:13.564 } 00:13:13.564 } 00:13:13.564 }' 00:13:13.564 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:13.564 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:13.564 pt2 00:13:13.564 pt3 00:13:13.564 pt4' 00:13:13.564 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.564 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:13.564 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.564 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:13.564 14:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.564 14:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.564 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.564 14:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.564 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.564 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.564 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.564 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:13:13.564 14:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.564 14:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.564 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.564 14:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.564 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.564 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.564 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.564 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:13.564 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.564 14:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.564 14:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.823 14:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.823 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.823 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.823 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.823 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:13.823 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:13:13.823 14:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.823 14:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.823 14:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.823 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.823 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.823 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:13.823 14:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.823 14:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.823 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:13.823 [2024-11-04 14:39:12.790003] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:13.823 14:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.823 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8f994166-7da7-488e-ab21-39ec55c78217 '!=' 8f994166-7da7-488e-ab21-39ec55c78217 ']' 00:13:13.823 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:13:13.823 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:13.823 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:13.823 14:39:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70819 00:13:13.823 14:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 70819 ']' 00:13:13.823 14:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 70819 00:13:13.823 14:39:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@957 -- # uname 00:13:13.823 14:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:13.823 14:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70819 00:13:13.823 14:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:13.823 14:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:13.823 killing process with pid 70819 00:13:13.823 14:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70819' 00:13:13.823 14:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 70819 00:13:13.823 [2024-11-04 14:39:12.870696] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:13.823 14:39:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 70819 00:13:13.823 [2024-11-04 14:39:12.870807] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:13.823 [2024-11-04 14:39:12.870904] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:13.823 [2024-11-04 14:39:12.870945] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:14.390 [2024-11-04 14:39:13.230539] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:15.326 14:39:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:15.326 00:13:15.326 real 0m5.962s 00:13:15.326 user 0m8.957s 00:13:15.326 sys 0m0.910s 00:13:15.326 14:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:15.326 14:39:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.326 ************************************ 00:13:15.326 END TEST raid_superblock_test 
00:13:15.326 ************************************ 00:13:15.326 14:39:14 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:13:15.326 14:39:14 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:15.326 14:39:14 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:15.326 14:39:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:15.326 ************************************ 00:13:15.326 START TEST raid_read_error_test 00:13:15.326 ************************************ 00:13:15.326 14:39:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 read 00:13:15.326 14:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:13:15.326 14:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:15.326 14:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:15.326 14:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:15.326 14:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:15.326 14:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:15.326 14:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:15.326 14:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:15.326 14:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:15.326 14:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:15.326 14:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:15.326 14:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:15.326 14:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:13:15.326 14:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:15.326 14:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:15.326 14:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:15.326 14:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:15.326 14:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:15.326 14:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:15.326 14:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:15.326 14:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:15.326 14:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:15.326 14:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:15.326 14:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:15.326 14:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:13:15.326 14:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:15.327 14:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:15.327 14:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:15.327 14:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Qz22oRtRWM 00:13:15.327 14:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71086 00:13:15.327 14:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71086 00:13:15.327 14:39:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:15.327 14:39:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 71086 ']' 00:13:15.327 14:39:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.327 14:39:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:15.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.327 14:39:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.327 14:39:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:15.327 14:39:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.327 [2024-11-04 14:39:14.427075] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:13:15.327 [2024-11-04 14:39:14.427274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71086 ] 00:13:15.586 [2024-11-04 14:39:14.612859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.866 [2024-11-04 14:39:14.744016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.866 [2024-11-04 14:39:14.946936] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:15.866 [2024-11-04 14:39:14.947028] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:16.434 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:16.434 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:13:16.434 14:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:16.434 14:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:16.434 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.434 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.434 BaseBdev1_malloc 00:13:16.434 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.434 14:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:16.434 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.434 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.434 true 00:13:16.434 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:16.434 14:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:16.434 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.434 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.434 [2024-11-04 14:39:15.468495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:16.434 [2024-11-04 14:39:15.468570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.434 [2024-11-04 14:39:15.468602] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:16.434 [2024-11-04 14:39:15.468620] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.434 [2024-11-04 14:39:15.471519] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.434 [2024-11-04 14:39:15.471575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:16.434 BaseBdev1 00:13:16.434 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.434 14:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:16.434 14:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:16.434 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.434 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.434 BaseBdev2_malloc 00:13:16.434 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.434 14:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:16.434 14:39:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.434 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.434 true 00:13:16.434 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.434 14:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:16.434 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.434 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.434 [2024-11-04 14:39:15.532522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:16.434 [2024-11-04 14:39:15.532590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.434 [2024-11-04 14:39:15.532616] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:16.434 [2024-11-04 14:39:15.532633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.434 [2024-11-04 14:39:15.535402] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.434 [2024-11-04 14:39:15.535454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:16.434 BaseBdev2 00:13:16.434 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.434 14:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:16.434 14:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:16.434 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.434 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.694 BaseBdev3_malloc 00:13:16.694 14:39:15 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.694 true 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.694 [2024-11-04 14:39:15.606548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:16.694 [2024-11-04 14:39:15.606622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.694 [2024-11-04 14:39:15.606649] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:16.694 [2024-11-04 14:39:15.606667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.694 [2024-11-04 14:39:15.609470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.694 [2024-11-04 14:39:15.609519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:16.694 BaseBdev3 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.694 BaseBdev4_malloc 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.694 true 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.694 [2024-11-04 14:39:15.666477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:16.694 [2024-11-04 14:39:15.666551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.694 [2024-11-04 14:39:15.666580] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:16.694 [2024-11-04 14:39:15.666597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.694 [2024-11-04 14:39:15.669321] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.694 [2024-11-04 14:39:15.669374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:16.694 BaseBdev4 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.694 [2024-11-04 14:39:15.674564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:16.694 [2024-11-04 14:39:15.676987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:16.694 [2024-11-04 14:39:15.677096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:16.694 [2024-11-04 14:39:15.677202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:16.694 [2024-11-04 14:39:15.677497] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:16.694 [2024-11-04 14:39:15.677537] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:16.694 [2024-11-04 14:39:15.677861] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:16.694 [2024-11-04 14:39:15.678124] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:16.694 [2024-11-04 14:39:15.678154] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:16.694 [2024-11-04 14:39:15.678356] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:16.694 14:39:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.694 "name": "raid_bdev1", 00:13:16.694 "uuid": "faa0fa10-72cd-4f0f-ba5e-b225f5ab33e6", 00:13:16.694 "strip_size_kb": 64, 00:13:16.694 "state": "online", 00:13:16.694 "raid_level": "raid0", 00:13:16.694 "superblock": true, 00:13:16.694 "num_base_bdevs": 4, 00:13:16.694 "num_base_bdevs_discovered": 4, 00:13:16.694 "num_base_bdevs_operational": 4, 00:13:16.694 "base_bdevs_list": [ 00:13:16.694 
{ 00:13:16.694 "name": "BaseBdev1", 00:13:16.694 "uuid": "1f645eac-107b-5b37-be24-b256089ec790", 00:13:16.694 "is_configured": true, 00:13:16.694 "data_offset": 2048, 00:13:16.694 "data_size": 63488 00:13:16.694 }, 00:13:16.694 { 00:13:16.694 "name": "BaseBdev2", 00:13:16.694 "uuid": "0dc48a5c-c2f1-5c4e-8c56-b9b8cc2ac7f0", 00:13:16.694 "is_configured": true, 00:13:16.694 "data_offset": 2048, 00:13:16.694 "data_size": 63488 00:13:16.694 }, 00:13:16.694 { 00:13:16.694 "name": "BaseBdev3", 00:13:16.694 "uuid": "8f742c84-c2c8-5599-89a3-84516b852d57", 00:13:16.694 "is_configured": true, 00:13:16.694 "data_offset": 2048, 00:13:16.694 "data_size": 63488 00:13:16.694 }, 00:13:16.694 { 00:13:16.694 "name": "BaseBdev4", 00:13:16.694 "uuid": "c99a8830-77b1-50d0-b668-6a8b6f273094", 00:13:16.694 "is_configured": true, 00:13:16.694 "data_offset": 2048, 00:13:16.694 "data_size": 63488 00:13:16.694 } 00:13:16.694 ] 00:13:16.694 }' 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.694 14:39:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.262 14:39:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:17.262 14:39:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:17.262 [2024-11-04 14:39:16.336156] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:18.200 14:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:18.200 14:39:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.200 14:39:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.200 14:39:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.200 14:39:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:18.200 14:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:18.200 14:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:18.200 14:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:18.200 14:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.200 14:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.200 14:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:18.200 14:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.200 14:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:18.200 14:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.200 14:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.200 14:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.200 14:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.200 14:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.200 14:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.200 14:39:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.200 14:39:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.200 14:39:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.200 14:39:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.200 "name": "raid_bdev1", 00:13:18.200 "uuid": "faa0fa10-72cd-4f0f-ba5e-b225f5ab33e6", 00:13:18.200 "strip_size_kb": 64, 00:13:18.200 "state": "online", 00:13:18.200 "raid_level": "raid0", 00:13:18.200 "superblock": true, 00:13:18.200 "num_base_bdevs": 4, 00:13:18.200 "num_base_bdevs_discovered": 4, 00:13:18.200 "num_base_bdevs_operational": 4, 00:13:18.200 "base_bdevs_list": [ 00:13:18.200 { 00:13:18.200 "name": "BaseBdev1", 00:13:18.200 "uuid": "1f645eac-107b-5b37-be24-b256089ec790", 00:13:18.200 "is_configured": true, 00:13:18.200 "data_offset": 2048, 00:13:18.200 "data_size": 63488 00:13:18.200 }, 00:13:18.200 { 00:13:18.200 "name": "BaseBdev2", 00:13:18.200 "uuid": "0dc48a5c-c2f1-5c4e-8c56-b9b8cc2ac7f0", 00:13:18.200 "is_configured": true, 00:13:18.200 "data_offset": 2048, 00:13:18.200 "data_size": 63488 00:13:18.200 }, 00:13:18.200 { 00:13:18.200 "name": "BaseBdev3", 00:13:18.200 "uuid": "8f742c84-c2c8-5599-89a3-84516b852d57", 00:13:18.200 "is_configured": true, 00:13:18.200 "data_offset": 2048, 00:13:18.200 "data_size": 63488 00:13:18.200 }, 00:13:18.200 { 00:13:18.200 "name": "BaseBdev4", 00:13:18.200 "uuid": "c99a8830-77b1-50d0-b668-6a8b6f273094", 00:13:18.200 "is_configured": true, 00:13:18.200 "data_offset": 2048, 00:13:18.200 "data_size": 63488 00:13:18.200 } 00:13:18.200 ] 00:13:18.200 }' 00:13:18.200 14:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.200 14:39:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.766 14:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:18.766 14:39:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.766 14:39:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.766 [2024-11-04 14:39:17.756126] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:18.766 [2024-11-04 14:39:17.756171] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:18.766 [2024-11-04 14:39:17.759536] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:18.766 [2024-11-04 14:39:17.759632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.766 [2024-11-04 14:39:17.759692] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:18.766 [2024-11-04 14:39:17.759713] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:18.766 { 00:13:18.766 "results": [ 00:13:18.766 { 00:13:18.766 "job": "raid_bdev1", 00:13:18.766 "core_mask": "0x1", 00:13:18.766 "workload": "randrw", 00:13:18.766 "percentage": 50, 00:13:18.766 "status": "finished", 00:13:18.766 "queue_depth": 1, 00:13:18.766 "io_size": 131072, 00:13:18.766 "runtime": 1.417459, 00:13:18.766 "iops": 10463.794719988373, 00:13:18.766 "mibps": 1307.9743399985466, 00:13:18.766 "io_failed": 1, 00:13:18.766 "io_timeout": 0, 00:13:18.766 "avg_latency_us": 133.85488817930536, 00:13:18.766 "min_latency_us": 41.192727272727275, 00:13:18.766 "max_latency_us": 1980.9745454545455 00:13:18.766 } 00:13:18.766 ], 00:13:18.766 "core_count": 1 00:13:18.766 } 00:13:18.766 14:39:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.766 14:39:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71086 00:13:18.766 14:39:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 71086 ']' 00:13:18.766 14:39:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 71086 00:13:18.766 14:39:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:13:18.766 14:39:17 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:18.766 14:39:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71086 00:13:18.766 14:39:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:18.766 14:39:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:18.766 killing process with pid 71086 00:13:18.766 14:39:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71086' 00:13:18.766 14:39:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 71086 00:13:18.766 [2024-11-04 14:39:17.797940] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:18.766 14:39:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 71086 00:13:19.025 [2024-11-04 14:39:18.091759] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:20.402 14:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Qz22oRtRWM 00:13:20.402 14:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:20.402 14:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:20.402 14:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:13:20.402 14:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:13:20.402 14:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:20.402 14:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:20.402 14:39:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:13:20.402 00:13:20.402 real 0m4.868s 00:13:20.402 user 0m6.038s 00:13:20.402 sys 0m0.598s 00:13:20.402 14:39:19 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:13:20.402 14:39:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.402 ************************************ 00:13:20.402 END TEST raid_read_error_test 00:13:20.402 ************************************ 00:13:20.402 14:39:19 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:13:20.402 14:39:19 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:20.402 14:39:19 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:20.402 14:39:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:20.402 ************************************ 00:13:20.402 START TEST raid_write_error_test 00:13:20.402 ************************************ 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 write 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:20.402 14:39:19 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ZB2vSkxpih 
00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71238 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71238 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 71238 ']' 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:20.402 14:39:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.402 [2024-11-04 14:39:19.329567] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:13:20.402 [2024-11-04 14:39:19.329749] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71238 ] 00:13:20.402 [2024-11-04 14:39:19.504786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.698 [2024-11-04 14:39:19.635391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.957 [2024-11-04 14:39:19.837011] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:20.957 [2024-11-04 14:39:19.837091] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.526 BaseBdev1_malloc 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.526 true 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.526 [2024-11-04 14:39:20.397874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:21.526 [2024-11-04 14:39:20.397966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:21.526 [2024-11-04 14:39:20.397999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:21.526 [2024-11-04 14:39:20.398018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:21.526 [2024-11-04 14:39:20.400802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:21.526 [2024-11-04 14:39:20.400852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:21.526 BaseBdev1 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.526 BaseBdev2_malloc 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:21.526 14:39:20 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.526 true 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.526 [2024-11-04 14:39:20.454037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:21.526 [2024-11-04 14:39:20.454110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:21.526 [2024-11-04 14:39:20.454135] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:21.526 [2024-11-04 14:39:20.454152] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:21.526 [2024-11-04 14:39:20.456881] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:21.526 [2024-11-04 14:39:20.456948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:21.526 BaseBdev2 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:21.526 BaseBdev3_malloc 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.526 true 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.526 [2024-11-04 14:39:20.521121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:21.526 [2024-11-04 14:39:20.521191] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:21.526 [2024-11-04 14:39:20.521218] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:21.526 [2024-11-04 14:39:20.521236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:21.526 [2024-11-04 14:39:20.524018] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:21.526 [2024-11-04 14:39:20.524072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:21.526 BaseBdev3 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.526 BaseBdev4_malloc 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.526 true 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.526 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.526 [2024-11-04 14:39:20.581073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:21.526 [2024-11-04 14:39:20.581141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:21.527 [2024-11-04 14:39:20.581169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:21.527 [2024-11-04 14:39:20.581187] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:21.527 [2024-11-04 14:39:20.583962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:21.527 [2024-11-04 14:39:20.584034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:21.527 BaseBdev4 
00:13:21.527 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.527 14:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:21.527 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.527 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.527 [2024-11-04 14:39:20.593167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:21.527 [2024-11-04 14:39:20.595650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:21.527 [2024-11-04 14:39:20.595778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:21.527 [2024-11-04 14:39:20.595897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:21.527 [2024-11-04 14:39:20.596242] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:21.527 [2024-11-04 14:39:20.596281] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:21.527 [2024-11-04 14:39:20.596606] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:21.527 [2024-11-04 14:39:20.596839] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:21.527 [2024-11-04 14:39:20.596866] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:21.527 [2024-11-04 14:39:20.597134] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:21.527 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.527 14:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:13:21.527 14:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:21.527 14:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.527 14:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:21.527 14:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:21.527 14:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:21.527 14:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.527 14:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.527 14:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.527 14:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.527 14:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.527 14:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.527 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.527 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.527 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.786 14:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.786 "name": "raid_bdev1", 00:13:21.786 "uuid": "62d6bf31-5398-4550-b615-deae27751cd0", 00:13:21.786 "strip_size_kb": 64, 00:13:21.786 "state": "online", 00:13:21.786 "raid_level": "raid0", 00:13:21.786 "superblock": true, 00:13:21.786 "num_base_bdevs": 4, 00:13:21.786 "num_base_bdevs_discovered": 4, 00:13:21.786 
"num_base_bdevs_operational": 4, 00:13:21.786 "base_bdevs_list": [ 00:13:21.786 { 00:13:21.786 "name": "BaseBdev1", 00:13:21.786 "uuid": "8ad12be7-bf32-52e2-9346-e4780fe944f8", 00:13:21.786 "is_configured": true, 00:13:21.786 "data_offset": 2048, 00:13:21.786 "data_size": 63488 00:13:21.786 }, 00:13:21.786 { 00:13:21.786 "name": "BaseBdev2", 00:13:21.786 "uuid": "db5d6350-6262-5d30-9fb9-d0a08843cc6d", 00:13:21.786 "is_configured": true, 00:13:21.786 "data_offset": 2048, 00:13:21.786 "data_size": 63488 00:13:21.786 }, 00:13:21.786 { 00:13:21.786 "name": "BaseBdev3", 00:13:21.786 "uuid": "e9104dd2-87ae-54f3-9293-6cc22e00c0ec", 00:13:21.786 "is_configured": true, 00:13:21.786 "data_offset": 2048, 00:13:21.786 "data_size": 63488 00:13:21.786 }, 00:13:21.786 { 00:13:21.786 "name": "BaseBdev4", 00:13:21.786 "uuid": "5d89c5f7-f927-5a87-bb19-c8bd653b55e1", 00:13:21.786 "is_configured": true, 00:13:21.786 "data_offset": 2048, 00:13:21.786 "data_size": 63488 00:13:21.786 } 00:13:21.786 ] 00:13:21.786 }' 00:13:21.786 14:39:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.786 14:39:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.045 14:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:22.045 14:39:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:22.303 [2024-11-04 14:39:21.254863] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:23.238 14:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:23.238 14:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.238 14:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.238 14:39:22 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.238 14:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:23.238 14:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:23.238 14:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:23.238 14:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:23.238 14:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.238 14:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.238 14:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:23.238 14:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:23.238 14:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:23.238 14:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.238 14:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.238 14:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.238 14:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.238 14:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.238 14:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.238 14:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.238 14:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.238 14:39:22 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.238 14:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.238 "name": "raid_bdev1", 00:13:23.238 "uuid": "62d6bf31-5398-4550-b615-deae27751cd0", 00:13:23.238 "strip_size_kb": 64, 00:13:23.238 "state": "online", 00:13:23.238 "raid_level": "raid0", 00:13:23.238 "superblock": true, 00:13:23.238 "num_base_bdevs": 4, 00:13:23.238 "num_base_bdevs_discovered": 4, 00:13:23.238 "num_base_bdevs_operational": 4, 00:13:23.238 "base_bdevs_list": [ 00:13:23.238 { 00:13:23.238 "name": "BaseBdev1", 00:13:23.238 "uuid": "8ad12be7-bf32-52e2-9346-e4780fe944f8", 00:13:23.238 "is_configured": true, 00:13:23.238 "data_offset": 2048, 00:13:23.238 "data_size": 63488 00:13:23.238 }, 00:13:23.238 { 00:13:23.238 "name": "BaseBdev2", 00:13:23.238 "uuid": "db5d6350-6262-5d30-9fb9-d0a08843cc6d", 00:13:23.238 "is_configured": true, 00:13:23.238 "data_offset": 2048, 00:13:23.238 "data_size": 63488 00:13:23.238 }, 00:13:23.238 { 00:13:23.238 "name": "BaseBdev3", 00:13:23.238 "uuid": "e9104dd2-87ae-54f3-9293-6cc22e00c0ec", 00:13:23.238 "is_configured": true, 00:13:23.238 "data_offset": 2048, 00:13:23.238 "data_size": 63488 00:13:23.238 }, 00:13:23.238 { 00:13:23.238 "name": "BaseBdev4", 00:13:23.238 "uuid": "5d89c5f7-f927-5a87-bb19-c8bd653b55e1", 00:13:23.238 "is_configured": true, 00:13:23.238 "data_offset": 2048, 00:13:23.238 "data_size": 63488 00:13:23.238 } 00:13:23.238 ] 00:13:23.238 }' 00:13:23.238 14:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.238 14:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.806 14:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:23.806 14:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.806 14:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:13:23.806 [2024-11-04 14:39:22.689408] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:23.806 [2024-11-04 14:39:22.689450] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:23.806 [2024-11-04 14:39:22.692976] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:23.806 [2024-11-04 14:39:22.693070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.806 [2024-11-04 14:39:22.693133] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:23.806 [2024-11-04 14:39:22.693152] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:23.806 { 00:13:23.806 "results": [ 00:13:23.806 { 00:13:23.806 "job": "raid_bdev1", 00:13:23.806 "core_mask": "0x1", 00:13:23.806 "workload": "randrw", 00:13:23.806 "percentage": 50, 00:13:23.806 "status": "finished", 00:13:23.806 "queue_depth": 1, 00:13:23.806 "io_size": 131072, 00:13:23.806 "runtime": 1.431971, 00:13:23.806 "iops": 10416.412064210797, 00:13:23.806 "mibps": 1302.0515080263497, 00:13:23.806 "io_failed": 1, 00:13:23.806 "io_timeout": 0, 00:13:23.806 "avg_latency_us": 134.23061473486624, 00:13:23.806 "min_latency_us": 39.56363636363636, 00:13:23.806 "max_latency_us": 2115.0254545454545 00:13:23.806 } 00:13:23.806 ], 00:13:23.806 "core_count": 1 00:13:23.806 } 00:13:23.806 14:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.806 14:39:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71238 00:13:23.806 14:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 71238 ']' 00:13:23.806 14:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 71238 00:13:23.806 14:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # 
uname 00:13:23.806 14:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:23.806 14:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71238 00:13:23.806 14:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:23.806 14:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:23.806 killing process with pid 71238 00:13:23.806 14:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71238' 00:13:23.806 14:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 71238 00:13:23.806 [2024-11-04 14:39:22.726484] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:23.806 14:39:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 71238 00:13:24.064 [2024-11-04 14:39:23.013336] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:25.014 14:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ZB2vSkxpih 00:13:25.014 14:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:25.014 14:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:25.014 14:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:13:25.014 14:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:13:25.014 14:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:25.014 14:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:25.014 14:39:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:13:25.014 00:13:25.014 real 0m4.884s 00:13:25.014 user 0m6.086s 00:13:25.014 sys 0m0.579s 00:13:25.014 
14:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:25.014 14:39:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.014 ************************************ 00:13:25.014 END TEST raid_write_error_test 00:13:25.014 ************************************ 00:13:25.274 14:39:24 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:25.274 14:39:24 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:13:25.274 14:39:24 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:25.274 14:39:24 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:25.274 14:39:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:25.274 ************************************ 00:13:25.274 START TEST raid_state_function_test 00:13:25.274 ************************************ 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 false 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:25.274 14:39:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:25.274 14:39:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71382 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71382' 00:13:25.274 Process raid pid: 71382 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71382 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 71382 ']' 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:25.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:25.274 14:39:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.274 [2024-11-04 14:39:24.279600] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:13:25.274 [2024-11-04 14:39:24.279796] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.533 [2024-11-04 14:39:24.473743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.533 [2024-11-04 14:39:24.606683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.792 [2024-11-04 14:39:24.815622] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:25.792 [2024-11-04 14:39:24.815675] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:26.359 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:26.359 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:13:26.359 14:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:26.359 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.359 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.359 [2024-11-04 14:39:25.281594] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:26.359 [2024-11-04 14:39:25.281663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:26.359 [2024-11-04 14:39:25.281680] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:26.359 [2024-11-04 14:39:25.281697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:26.359 [2024-11-04 14:39:25.281707] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:26.359 [2024-11-04 14:39:25.281723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:26.359 [2024-11-04 14:39:25.281733] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:26.359 [2024-11-04 14:39:25.281747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:26.359 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.359 14:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:26.360 14:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.360 14:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.360 14:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:26.360 14:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:26.360 14:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:26.360 14:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.360 14:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.360 14:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.360 14:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.360 14:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.360 14:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.360 14:39:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.360 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.360 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.360 14:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.360 "name": "Existed_Raid", 00:13:26.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.360 "strip_size_kb": 64, 00:13:26.360 "state": "configuring", 00:13:26.360 "raid_level": "concat", 00:13:26.360 "superblock": false, 00:13:26.360 "num_base_bdevs": 4, 00:13:26.360 "num_base_bdevs_discovered": 0, 00:13:26.360 "num_base_bdevs_operational": 4, 00:13:26.360 "base_bdevs_list": [ 00:13:26.360 { 00:13:26.360 "name": "BaseBdev1", 00:13:26.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.360 "is_configured": false, 00:13:26.360 "data_offset": 0, 00:13:26.360 "data_size": 0 00:13:26.360 }, 00:13:26.360 { 00:13:26.360 "name": "BaseBdev2", 00:13:26.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.360 "is_configured": false, 00:13:26.360 "data_offset": 0, 00:13:26.360 "data_size": 0 00:13:26.360 }, 00:13:26.360 { 00:13:26.360 "name": "BaseBdev3", 00:13:26.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.360 "is_configured": false, 00:13:26.360 "data_offset": 0, 00:13:26.360 "data_size": 0 00:13:26.360 }, 00:13:26.360 { 00:13:26.360 "name": "BaseBdev4", 00:13:26.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.360 "is_configured": false, 00:13:26.360 "data_offset": 0, 00:13:26.360 "data_size": 0 00:13:26.360 } 00:13:26.360 ] 00:13:26.360 }' 00:13:26.360 14:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.360 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.926 14:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:13:26.926 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.926 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.926 [2024-11-04 14:39:25.773650] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:26.926 [2024-11-04 14:39:25.773717] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:26.926 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.926 14:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.927 [2024-11-04 14:39:25.781615] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:26.927 [2024-11-04 14:39:25.781685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:26.927 [2024-11-04 14:39:25.781715] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:26.927 [2024-11-04 14:39:25.781731] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:26.927 [2024-11-04 14:39:25.781741] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:26.927 [2024-11-04 14:39:25.781754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:26.927 [2024-11-04 14:39:25.781763] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:26.927 [2024-11-04 14:39:25.781777] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.927 [2024-11-04 14:39:25.829193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:26.927 BaseBdev1 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.927 [ 00:13:26.927 { 00:13:26.927 "name": "BaseBdev1", 00:13:26.927 "aliases": [ 00:13:26.927 "590a941c-1ab9-412d-ae25-02e79e1d717e" 00:13:26.927 ], 00:13:26.927 "product_name": "Malloc disk", 00:13:26.927 "block_size": 512, 00:13:26.927 "num_blocks": 65536, 00:13:26.927 "uuid": "590a941c-1ab9-412d-ae25-02e79e1d717e", 00:13:26.927 "assigned_rate_limits": { 00:13:26.927 "rw_ios_per_sec": 0, 00:13:26.927 "rw_mbytes_per_sec": 0, 00:13:26.927 "r_mbytes_per_sec": 0, 00:13:26.927 "w_mbytes_per_sec": 0 00:13:26.927 }, 00:13:26.927 "claimed": true, 00:13:26.927 "claim_type": "exclusive_write", 00:13:26.927 "zoned": false, 00:13:26.927 "supported_io_types": { 00:13:26.927 "read": true, 00:13:26.927 "write": true, 00:13:26.927 "unmap": true, 00:13:26.927 "flush": true, 00:13:26.927 "reset": true, 00:13:26.927 "nvme_admin": false, 00:13:26.927 "nvme_io": false, 00:13:26.927 "nvme_io_md": false, 00:13:26.927 "write_zeroes": true, 00:13:26.927 "zcopy": true, 00:13:26.927 "get_zone_info": false, 00:13:26.927 "zone_management": false, 00:13:26.927 "zone_append": false, 00:13:26.927 "compare": false, 00:13:26.927 "compare_and_write": false, 00:13:26.927 "abort": true, 00:13:26.927 "seek_hole": false, 00:13:26.927 "seek_data": false, 00:13:26.927 "copy": true, 00:13:26.927 "nvme_iov_md": false 00:13:26.927 }, 00:13:26.927 "memory_domains": [ 00:13:26.927 { 00:13:26.927 "dma_device_id": "system", 00:13:26.927 "dma_device_type": 1 00:13:26.927 }, 00:13:26.927 { 00:13:26.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.927 "dma_device_type": 2 00:13:26.927 } 00:13:26.927 ], 00:13:26.927 "driver_specific": {} 00:13:26.927 } 00:13:26.927 ] 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.927 "name": "Existed_Raid", 
00:13:26.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.927 "strip_size_kb": 64, 00:13:26.927 "state": "configuring", 00:13:26.927 "raid_level": "concat", 00:13:26.927 "superblock": false, 00:13:26.927 "num_base_bdevs": 4, 00:13:26.927 "num_base_bdevs_discovered": 1, 00:13:26.927 "num_base_bdevs_operational": 4, 00:13:26.927 "base_bdevs_list": [ 00:13:26.927 { 00:13:26.927 "name": "BaseBdev1", 00:13:26.927 "uuid": "590a941c-1ab9-412d-ae25-02e79e1d717e", 00:13:26.927 "is_configured": true, 00:13:26.927 "data_offset": 0, 00:13:26.927 "data_size": 65536 00:13:26.927 }, 00:13:26.927 { 00:13:26.927 "name": "BaseBdev2", 00:13:26.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.927 "is_configured": false, 00:13:26.927 "data_offset": 0, 00:13:26.927 "data_size": 0 00:13:26.927 }, 00:13:26.927 { 00:13:26.927 "name": "BaseBdev3", 00:13:26.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.927 "is_configured": false, 00:13:26.927 "data_offset": 0, 00:13:26.927 "data_size": 0 00:13:26.927 }, 00:13:26.927 { 00:13:26.927 "name": "BaseBdev4", 00:13:26.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.927 "is_configured": false, 00:13:26.927 "data_offset": 0, 00:13:26.927 "data_size": 0 00:13:26.927 } 00:13:26.927 ] 00:13:26.927 }' 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.927 14:39:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.495 14:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:27.495 14:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.495 14:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.495 [2024-11-04 14:39:26.381396] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:27.495 [2024-11-04 14:39:26.381461] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:27.495 14:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.495 14:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:27.495 14:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.495 14:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.495 [2024-11-04 14:39:26.389490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:27.495 [2024-11-04 14:39:26.392174] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:27.495 [2024-11-04 14:39:26.392226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:27.495 [2024-11-04 14:39:26.392244] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:27.495 [2024-11-04 14:39:26.392262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:27.495 [2024-11-04 14:39:26.392272] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:27.495 [2024-11-04 14:39:26.392286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:27.495 14:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.495 14:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:27.495 14:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:27.495 14:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:13:27.495 14:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.495 14:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.495 14:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:27.495 14:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:27.495 14:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:27.495 14:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.495 14:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.495 14:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.495 14:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.495 14:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.495 14:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.495 14:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.495 14:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.495 14:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.495 14:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.495 "name": "Existed_Raid", 00:13:27.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.495 "strip_size_kb": 64, 00:13:27.495 "state": "configuring", 00:13:27.495 "raid_level": "concat", 00:13:27.495 "superblock": false, 00:13:27.495 "num_base_bdevs": 4, 00:13:27.495 
"num_base_bdevs_discovered": 1, 00:13:27.495 "num_base_bdevs_operational": 4, 00:13:27.495 "base_bdevs_list": [ 00:13:27.495 { 00:13:27.495 "name": "BaseBdev1", 00:13:27.495 "uuid": "590a941c-1ab9-412d-ae25-02e79e1d717e", 00:13:27.495 "is_configured": true, 00:13:27.495 "data_offset": 0, 00:13:27.495 "data_size": 65536 00:13:27.495 }, 00:13:27.495 { 00:13:27.495 "name": "BaseBdev2", 00:13:27.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.495 "is_configured": false, 00:13:27.495 "data_offset": 0, 00:13:27.495 "data_size": 0 00:13:27.495 }, 00:13:27.495 { 00:13:27.495 "name": "BaseBdev3", 00:13:27.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.495 "is_configured": false, 00:13:27.495 "data_offset": 0, 00:13:27.495 "data_size": 0 00:13:27.495 }, 00:13:27.495 { 00:13:27.495 "name": "BaseBdev4", 00:13:27.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.496 "is_configured": false, 00:13:27.496 "data_offset": 0, 00:13:27.496 "data_size": 0 00:13:27.496 } 00:13:27.496 ] 00:13:27.496 }' 00:13:27.496 14:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.496 14:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.063 [2024-11-04 14:39:26.939458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:28.063 BaseBdev2 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:28.063 14:39:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.063 [ 00:13:28.063 { 00:13:28.063 "name": "BaseBdev2", 00:13:28.063 "aliases": [ 00:13:28.063 "6d05c76b-ce2c-4328-8c65-21b4f9b09b59" 00:13:28.063 ], 00:13:28.063 "product_name": "Malloc disk", 00:13:28.063 "block_size": 512, 00:13:28.063 "num_blocks": 65536, 00:13:28.063 "uuid": "6d05c76b-ce2c-4328-8c65-21b4f9b09b59", 00:13:28.063 "assigned_rate_limits": { 00:13:28.063 "rw_ios_per_sec": 0, 00:13:28.063 "rw_mbytes_per_sec": 0, 00:13:28.063 "r_mbytes_per_sec": 0, 00:13:28.063 "w_mbytes_per_sec": 0 00:13:28.063 }, 00:13:28.063 "claimed": true, 00:13:28.063 "claim_type": "exclusive_write", 00:13:28.063 "zoned": false, 00:13:28.063 "supported_io_types": { 
00:13:28.063 "read": true, 00:13:28.063 "write": true, 00:13:28.063 "unmap": true, 00:13:28.063 "flush": true, 00:13:28.063 "reset": true, 00:13:28.063 "nvme_admin": false, 00:13:28.063 "nvme_io": false, 00:13:28.063 "nvme_io_md": false, 00:13:28.063 "write_zeroes": true, 00:13:28.063 "zcopy": true, 00:13:28.063 "get_zone_info": false, 00:13:28.063 "zone_management": false, 00:13:28.063 "zone_append": false, 00:13:28.063 "compare": false, 00:13:28.063 "compare_and_write": false, 00:13:28.063 "abort": true, 00:13:28.063 "seek_hole": false, 00:13:28.063 "seek_data": false, 00:13:28.063 "copy": true, 00:13:28.063 "nvme_iov_md": false 00:13:28.063 }, 00:13:28.063 "memory_domains": [ 00:13:28.063 { 00:13:28.063 "dma_device_id": "system", 00:13:28.063 "dma_device_type": 1 00:13:28.063 }, 00:13:28.063 { 00:13:28.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.063 "dma_device_type": 2 00:13:28.063 } 00:13:28.063 ], 00:13:28.063 "driver_specific": {} 00:13:28.063 } 00:13:28.063 ] 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.063 14:39:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.063 14:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.063 "name": "Existed_Raid", 00:13:28.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.063 "strip_size_kb": 64, 00:13:28.063 "state": "configuring", 00:13:28.063 "raid_level": "concat", 00:13:28.063 "superblock": false, 00:13:28.063 "num_base_bdevs": 4, 00:13:28.063 "num_base_bdevs_discovered": 2, 00:13:28.063 "num_base_bdevs_operational": 4, 00:13:28.063 "base_bdevs_list": [ 00:13:28.063 { 00:13:28.063 "name": "BaseBdev1", 00:13:28.063 "uuid": "590a941c-1ab9-412d-ae25-02e79e1d717e", 00:13:28.063 "is_configured": true, 00:13:28.063 "data_offset": 0, 00:13:28.063 "data_size": 65536 00:13:28.063 }, 00:13:28.063 { 00:13:28.063 "name": "BaseBdev2", 00:13:28.063 "uuid": "6d05c76b-ce2c-4328-8c65-21b4f9b09b59", 00:13:28.063 
"is_configured": true, 00:13:28.064 "data_offset": 0, 00:13:28.064 "data_size": 65536 00:13:28.064 }, 00:13:28.064 { 00:13:28.064 "name": "BaseBdev3", 00:13:28.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.064 "is_configured": false, 00:13:28.064 "data_offset": 0, 00:13:28.064 "data_size": 0 00:13:28.064 }, 00:13:28.064 { 00:13:28.064 "name": "BaseBdev4", 00:13:28.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.064 "is_configured": false, 00:13:28.064 "data_offset": 0, 00:13:28.064 "data_size": 0 00:13:28.064 } 00:13:28.064 ] 00:13:28.064 }' 00:13:28.064 14:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.064 14:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.662 [2024-11-04 14:39:27.515239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:28.662 BaseBdev3 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.662 [ 00:13:28.662 { 00:13:28.662 "name": "BaseBdev3", 00:13:28.662 "aliases": [ 00:13:28.662 "1e54e256-59c3-4d98-8e4a-bfd32b7b3dd3" 00:13:28.662 ], 00:13:28.662 "product_name": "Malloc disk", 00:13:28.662 "block_size": 512, 00:13:28.662 "num_blocks": 65536, 00:13:28.662 "uuid": "1e54e256-59c3-4d98-8e4a-bfd32b7b3dd3", 00:13:28.662 "assigned_rate_limits": { 00:13:28.662 "rw_ios_per_sec": 0, 00:13:28.662 "rw_mbytes_per_sec": 0, 00:13:28.662 "r_mbytes_per_sec": 0, 00:13:28.662 "w_mbytes_per_sec": 0 00:13:28.662 }, 00:13:28.662 "claimed": true, 00:13:28.662 "claim_type": "exclusive_write", 00:13:28.662 "zoned": false, 00:13:28.662 "supported_io_types": { 00:13:28.662 "read": true, 00:13:28.662 "write": true, 00:13:28.662 "unmap": true, 00:13:28.662 "flush": true, 00:13:28.662 "reset": true, 00:13:28.662 "nvme_admin": false, 00:13:28.662 "nvme_io": false, 00:13:28.662 "nvme_io_md": false, 00:13:28.662 "write_zeroes": true, 00:13:28.662 "zcopy": true, 00:13:28.662 "get_zone_info": false, 00:13:28.662 "zone_management": false, 00:13:28.662 "zone_append": false, 00:13:28.662 "compare": false, 00:13:28.662 "compare_and_write": false, 
00:13:28.662 "abort": true, 00:13:28.662 "seek_hole": false, 00:13:28.662 "seek_data": false, 00:13:28.662 "copy": true, 00:13:28.662 "nvme_iov_md": false 00:13:28.662 }, 00:13:28.662 "memory_domains": [ 00:13:28.662 { 00:13:28.662 "dma_device_id": "system", 00:13:28.662 "dma_device_type": 1 00:13:28.662 }, 00:13:28.662 { 00:13:28.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.662 "dma_device_type": 2 00:13:28.662 } 00:13:28.662 ], 00:13:28.662 "driver_specific": {} 00:13:28.662 } 00:13:28.662 ] 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.662 "name": "Existed_Raid", 00:13:28.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.662 "strip_size_kb": 64, 00:13:28.662 "state": "configuring", 00:13:28.662 "raid_level": "concat", 00:13:28.662 "superblock": false, 00:13:28.662 "num_base_bdevs": 4, 00:13:28.662 "num_base_bdevs_discovered": 3, 00:13:28.662 "num_base_bdevs_operational": 4, 00:13:28.662 "base_bdevs_list": [ 00:13:28.662 { 00:13:28.662 "name": "BaseBdev1", 00:13:28.662 "uuid": "590a941c-1ab9-412d-ae25-02e79e1d717e", 00:13:28.662 "is_configured": true, 00:13:28.662 "data_offset": 0, 00:13:28.662 "data_size": 65536 00:13:28.662 }, 00:13:28.662 { 00:13:28.662 "name": "BaseBdev2", 00:13:28.662 "uuid": "6d05c76b-ce2c-4328-8c65-21b4f9b09b59", 00:13:28.662 "is_configured": true, 00:13:28.662 "data_offset": 0, 00:13:28.662 "data_size": 65536 00:13:28.662 }, 00:13:28.662 { 00:13:28.662 "name": "BaseBdev3", 00:13:28.662 "uuid": "1e54e256-59c3-4d98-8e4a-bfd32b7b3dd3", 00:13:28.662 "is_configured": true, 00:13:28.662 "data_offset": 0, 00:13:28.662 "data_size": 65536 00:13:28.662 }, 00:13:28.662 { 00:13:28.662 "name": "BaseBdev4", 00:13:28.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.662 "is_configured": false, 
00:13:28.662 "data_offset": 0, 00:13:28.662 "data_size": 0 00:13:28.662 } 00:13:28.662 ] 00:13:28.662 }' 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.662 14:39:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.231 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:29.231 14:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.231 14:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.231 [2024-11-04 14:39:28.110603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:29.231 [2024-11-04 14:39:28.110910] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:29.231 [2024-11-04 14:39:28.110968] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:29.231 [2024-11-04 14:39:28.111319] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:29.231 [2024-11-04 14:39:28.111632] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:29.231 [2024-11-04 14:39:28.111660] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:29.231 [2024-11-04 14:39:28.112046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.231 BaseBdev4 00:13:29.231 14:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.231 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:29.231 14:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:13:29.231 14:39:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:29.231 14:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:29.231 14:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:29.231 14:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:29.231 14:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:29.231 14:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.231 14:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.231 14:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.231 14:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:29.231 14:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.231 14:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.231 [ 00:13:29.231 { 00:13:29.231 "name": "BaseBdev4", 00:13:29.231 "aliases": [ 00:13:29.231 "225a4431-d0a1-45fc-87fa-285fca87f210" 00:13:29.231 ], 00:13:29.231 "product_name": "Malloc disk", 00:13:29.231 "block_size": 512, 00:13:29.231 "num_blocks": 65536, 00:13:29.231 "uuid": "225a4431-d0a1-45fc-87fa-285fca87f210", 00:13:29.231 "assigned_rate_limits": { 00:13:29.231 "rw_ios_per_sec": 0, 00:13:29.231 "rw_mbytes_per_sec": 0, 00:13:29.231 "r_mbytes_per_sec": 0, 00:13:29.231 "w_mbytes_per_sec": 0 00:13:29.231 }, 00:13:29.231 "claimed": true, 00:13:29.231 "claim_type": "exclusive_write", 00:13:29.231 "zoned": false, 00:13:29.231 "supported_io_types": { 00:13:29.231 "read": true, 00:13:29.231 "write": true, 00:13:29.231 "unmap": true, 00:13:29.231 "flush": true, 00:13:29.231 "reset": true, 00:13:29.231 
"nvme_admin": false, 00:13:29.231 "nvme_io": false, 00:13:29.231 "nvme_io_md": false, 00:13:29.231 "write_zeroes": true, 00:13:29.231 "zcopy": true, 00:13:29.231 "get_zone_info": false, 00:13:29.231 "zone_management": false, 00:13:29.231 "zone_append": false, 00:13:29.231 "compare": false, 00:13:29.231 "compare_and_write": false, 00:13:29.231 "abort": true, 00:13:29.231 "seek_hole": false, 00:13:29.231 "seek_data": false, 00:13:29.231 "copy": true, 00:13:29.231 "nvme_iov_md": false 00:13:29.231 }, 00:13:29.231 "memory_domains": [ 00:13:29.231 { 00:13:29.231 "dma_device_id": "system", 00:13:29.231 "dma_device_type": 1 00:13:29.231 }, 00:13:29.231 { 00:13:29.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.231 "dma_device_type": 2 00:13:29.231 } 00:13:29.231 ], 00:13:29.231 "driver_specific": {} 00:13:29.231 } 00:13:29.231 ] 00:13:29.231 14:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.231 14:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:29.231 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:29.231 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:29.231 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:13:29.231 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.231 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.232 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:29.232 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.232 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:29.232 
14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.232 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.232 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.232 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.232 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.232 14:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.232 14:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.232 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.232 14:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.232 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.232 "name": "Existed_Raid", 00:13:29.232 "uuid": "7c0c0d27-3df3-4a65-ac59-add23300ecd5", 00:13:29.232 "strip_size_kb": 64, 00:13:29.232 "state": "online", 00:13:29.232 "raid_level": "concat", 00:13:29.232 "superblock": false, 00:13:29.232 "num_base_bdevs": 4, 00:13:29.232 "num_base_bdevs_discovered": 4, 00:13:29.232 "num_base_bdevs_operational": 4, 00:13:29.232 "base_bdevs_list": [ 00:13:29.232 { 00:13:29.232 "name": "BaseBdev1", 00:13:29.232 "uuid": "590a941c-1ab9-412d-ae25-02e79e1d717e", 00:13:29.232 "is_configured": true, 00:13:29.232 "data_offset": 0, 00:13:29.232 "data_size": 65536 00:13:29.232 }, 00:13:29.232 { 00:13:29.232 "name": "BaseBdev2", 00:13:29.232 "uuid": "6d05c76b-ce2c-4328-8c65-21b4f9b09b59", 00:13:29.232 "is_configured": true, 00:13:29.232 "data_offset": 0, 00:13:29.232 "data_size": 65536 00:13:29.232 }, 00:13:29.232 { 00:13:29.232 "name": "BaseBdev3", 
00:13:29.232 "uuid": "1e54e256-59c3-4d98-8e4a-bfd32b7b3dd3", 00:13:29.232 "is_configured": true, 00:13:29.232 "data_offset": 0, 00:13:29.232 "data_size": 65536 00:13:29.232 }, 00:13:29.232 { 00:13:29.232 "name": "BaseBdev4", 00:13:29.232 "uuid": "225a4431-d0a1-45fc-87fa-285fca87f210", 00:13:29.232 "is_configured": true, 00:13:29.232 "data_offset": 0, 00:13:29.232 "data_size": 65536 00:13:29.232 } 00:13:29.232 ] 00:13:29.232 }' 00:13:29.232 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.232 14:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.800 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:29.800 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:29.800 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:29.800 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:29.800 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:29.800 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:29.800 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:29.800 14:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.800 14:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.800 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:29.800 [2024-11-04 14:39:28.679366] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:29.800 14:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.800 
14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:29.800 "name": "Existed_Raid", 00:13:29.800 "aliases": [ 00:13:29.800 "7c0c0d27-3df3-4a65-ac59-add23300ecd5" 00:13:29.800 ], 00:13:29.800 "product_name": "Raid Volume", 00:13:29.800 "block_size": 512, 00:13:29.800 "num_blocks": 262144, 00:13:29.800 "uuid": "7c0c0d27-3df3-4a65-ac59-add23300ecd5", 00:13:29.800 "assigned_rate_limits": { 00:13:29.800 "rw_ios_per_sec": 0, 00:13:29.800 "rw_mbytes_per_sec": 0, 00:13:29.800 "r_mbytes_per_sec": 0, 00:13:29.800 "w_mbytes_per_sec": 0 00:13:29.800 }, 00:13:29.800 "claimed": false, 00:13:29.800 "zoned": false, 00:13:29.800 "supported_io_types": { 00:13:29.800 "read": true, 00:13:29.800 "write": true, 00:13:29.800 "unmap": true, 00:13:29.800 "flush": true, 00:13:29.800 "reset": true, 00:13:29.800 "nvme_admin": false, 00:13:29.800 "nvme_io": false, 00:13:29.800 "nvme_io_md": false, 00:13:29.800 "write_zeroes": true, 00:13:29.800 "zcopy": false, 00:13:29.800 "get_zone_info": false, 00:13:29.800 "zone_management": false, 00:13:29.800 "zone_append": false, 00:13:29.800 "compare": false, 00:13:29.800 "compare_and_write": false, 00:13:29.800 "abort": false, 00:13:29.800 "seek_hole": false, 00:13:29.800 "seek_data": false, 00:13:29.800 "copy": false, 00:13:29.800 "nvme_iov_md": false 00:13:29.800 }, 00:13:29.800 "memory_domains": [ 00:13:29.800 { 00:13:29.800 "dma_device_id": "system", 00:13:29.800 "dma_device_type": 1 00:13:29.800 }, 00:13:29.800 { 00:13:29.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.800 "dma_device_type": 2 00:13:29.800 }, 00:13:29.800 { 00:13:29.800 "dma_device_id": "system", 00:13:29.800 "dma_device_type": 1 00:13:29.800 }, 00:13:29.800 { 00:13:29.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.800 "dma_device_type": 2 00:13:29.800 }, 00:13:29.800 { 00:13:29.800 "dma_device_id": "system", 00:13:29.800 "dma_device_type": 1 00:13:29.800 }, 00:13:29.800 { 00:13:29.800 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:29.800 "dma_device_type": 2 00:13:29.800 }, 00:13:29.800 { 00:13:29.800 "dma_device_id": "system", 00:13:29.800 "dma_device_type": 1 00:13:29.800 }, 00:13:29.800 { 00:13:29.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.800 "dma_device_type": 2 00:13:29.800 } 00:13:29.800 ], 00:13:29.800 "driver_specific": { 00:13:29.800 "raid": { 00:13:29.800 "uuid": "7c0c0d27-3df3-4a65-ac59-add23300ecd5", 00:13:29.800 "strip_size_kb": 64, 00:13:29.800 "state": "online", 00:13:29.800 "raid_level": "concat", 00:13:29.800 "superblock": false, 00:13:29.800 "num_base_bdevs": 4, 00:13:29.800 "num_base_bdevs_discovered": 4, 00:13:29.800 "num_base_bdevs_operational": 4, 00:13:29.800 "base_bdevs_list": [ 00:13:29.800 { 00:13:29.800 "name": "BaseBdev1", 00:13:29.800 "uuid": "590a941c-1ab9-412d-ae25-02e79e1d717e", 00:13:29.800 "is_configured": true, 00:13:29.800 "data_offset": 0, 00:13:29.800 "data_size": 65536 00:13:29.800 }, 00:13:29.800 { 00:13:29.800 "name": "BaseBdev2", 00:13:29.800 "uuid": "6d05c76b-ce2c-4328-8c65-21b4f9b09b59", 00:13:29.800 "is_configured": true, 00:13:29.800 "data_offset": 0, 00:13:29.800 "data_size": 65536 00:13:29.800 }, 00:13:29.800 { 00:13:29.800 "name": "BaseBdev3", 00:13:29.800 "uuid": "1e54e256-59c3-4d98-8e4a-bfd32b7b3dd3", 00:13:29.800 "is_configured": true, 00:13:29.800 "data_offset": 0, 00:13:29.800 "data_size": 65536 00:13:29.800 }, 00:13:29.800 { 00:13:29.800 "name": "BaseBdev4", 00:13:29.800 "uuid": "225a4431-d0a1-45fc-87fa-285fca87f210", 00:13:29.800 "is_configured": true, 00:13:29.800 "data_offset": 0, 00:13:29.800 "data_size": 65536 00:13:29.800 } 00:13:29.800 ] 00:13:29.800 } 00:13:29.800 } 00:13:29.800 }' 00:13:29.800 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:29.800 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:29.800 BaseBdev2 
00:13:29.800 BaseBdev3 00:13:29.800 BaseBdev4' 00:13:29.800 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:29.800 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:29.800 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:29.800 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:29.800 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:29.800 14:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.801 14:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.801 14:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.801 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:29.801 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:29.801 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:29.801 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:29.801 14:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.801 14:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.801 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:29.801 14:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.060 14:39:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.060 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.060 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.060 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:30.060 14:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.060 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.060 14:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.060 14:39:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.060 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.060 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.060 14:39:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.060 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:30.060 14:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.060 14:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.060 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.060 14:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.060 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.060 14:39:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.060 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:30.060 14:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.060 14:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.060 [2024-11-04 14:39:29.059037] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:30.060 [2024-11-04 14:39:29.059090] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:30.060 [2024-11-04 14:39:29.059156] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:30.060 14:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.060 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:30.060 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:13:30.060 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:30.060 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:30.060 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:30.060 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:13:30.060 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:30.060 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:30.060 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:30.060 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:13:30.060 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:30.060 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.060 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.060 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.060 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.060 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.060 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.060 14:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.060 14:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.060 14:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.341 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.341 "name": "Existed_Raid", 00:13:30.341 "uuid": "7c0c0d27-3df3-4a65-ac59-add23300ecd5", 00:13:30.341 "strip_size_kb": 64, 00:13:30.341 "state": "offline", 00:13:30.341 "raid_level": "concat", 00:13:30.341 "superblock": false, 00:13:30.341 "num_base_bdevs": 4, 00:13:30.341 "num_base_bdevs_discovered": 3, 00:13:30.341 "num_base_bdevs_operational": 3, 00:13:30.341 "base_bdevs_list": [ 00:13:30.341 { 00:13:30.341 "name": null, 00:13:30.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.341 "is_configured": false, 00:13:30.341 "data_offset": 0, 00:13:30.341 "data_size": 65536 00:13:30.341 }, 00:13:30.341 { 00:13:30.341 "name": "BaseBdev2", 00:13:30.341 "uuid": "6d05c76b-ce2c-4328-8c65-21b4f9b09b59", 00:13:30.341 "is_configured": 
true, 00:13:30.341 "data_offset": 0, 00:13:30.341 "data_size": 65536 00:13:30.341 }, 00:13:30.341 { 00:13:30.341 "name": "BaseBdev3", 00:13:30.341 "uuid": "1e54e256-59c3-4d98-8e4a-bfd32b7b3dd3", 00:13:30.341 "is_configured": true, 00:13:30.341 "data_offset": 0, 00:13:30.341 "data_size": 65536 00:13:30.341 }, 00:13:30.341 { 00:13:30.341 "name": "BaseBdev4", 00:13:30.341 "uuid": "225a4431-d0a1-45fc-87fa-285fca87f210", 00:13:30.341 "is_configured": true, 00:13:30.341 "data_offset": 0, 00:13:30.341 "data_size": 65536 00:13:30.341 } 00:13:30.341 ] 00:13:30.341 }' 00:13:30.341 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.341 14:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.610 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:30.610 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:30.610 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:30.610 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.610 14:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.610 14:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.610 14:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.611 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:30.611 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:30.611 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:30.611 14:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:30.611 14:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.611 [2024-11-04 14:39:29.722908] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:30.881 14:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.881 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:30.881 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:30.881 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.881 14:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.881 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:30.881 14:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.881 14:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.881 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:30.881 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:30.881 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:30.881 14:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.881 14:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.881 [2024-11-04 14:39:29.867926] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:30.881 14:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.881 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:30.881 14:39:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:30.881 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.881 14:39:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:30.881 14:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.881 14:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.881 14:39:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.148 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:31.148 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:31.148 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:31.148 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.148 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.148 [2024-11-04 14:39:30.013738] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:31.148 [2024-11-04 14:39:30.013815] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:31.148 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.148 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:31.148 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:31.148 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.148 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:13:31.148 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.148 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.148 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.148 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:31.148 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:31.148 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:31.148 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:31.148 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:31.148 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:31.148 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.148 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.148 BaseBdev2 00:13:31.148 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.148 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:31.148 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:31.148 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:31.148 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:31.148 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:31.148 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:13:31.148 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:31.148 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.149 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.149 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.149 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:31.149 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.149 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.149 [ 00:13:31.149 { 00:13:31.149 "name": "BaseBdev2", 00:13:31.149 "aliases": [ 00:13:31.149 "6d0e4179-5b06-44a2-bec3-14903518370b" 00:13:31.149 ], 00:13:31.149 "product_name": "Malloc disk", 00:13:31.149 "block_size": 512, 00:13:31.149 "num_blocks": 65536, 00:13:31.149 "uuid": "6d0e4179-5b06-44a2-bec3-14903518370b", 00:13:31.149 "assigned_rate_limits": { 00:13:31.149 "rw_ios_per_sec": 0, 00:13:31.149 "rw_mbytes_per_sec": 0, 00:13:31.149 "r_mbytes_per_sec": 0, 00:13:31.149 "w_mbytes_per_sec": 0 00:13:31.149 }, 00:13:31.149 "claimed": false, 00:13:31.149 "zoned": false, 00:13:31.149 "supported_io_types": { 00:13:31.149 "read": true, 00:13:31.149 "write": true, 00:13:31.149 "unmap": true, 00:13:31.149 "flush": true, 00:13:31.149 "reset": true, 00:13:31.149 "nvme_admin": false, 00:13:31.149 "nvme_io": false, 00:13:31.149 "nvme_io_md": false, 00:13:31.149 "write_zeroes": true, 00:13:31.149 "zcopy": true, 00:13:31.149 "get_zone_info": false, 00:13:31.149 "zone_management": false, 00:13:31.149 "zone_append": false, 00:13:31.149 "compare": false, 00:13:31.149 "compare_and_write": false, 00:13:31.149 "abort": true, 00:13:31.149 "seek_hole": false, 00:13:31.149 
"seek_data": false, 00:13:31.149 "copy": true, 00:13:31.149 "nvme_iov_md": false 00:13:31.149 }, 00:13:31.149 "memory_domains": [ 00:13:31.149 { 00:13:31.149 "dma_device_id": "system", 00:13:31.149 "dma_device_type": 1 00:13:31.149 }, 00:13:31.149 { 00:13:31.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.149 "dma_device_type": 2 00:13:31.149 } 00:13:31.149 ], 00:13:31.149 "driver_specific": {} 00:13:31.149 } 00:13:31.149 ] 00:13:31.149 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.149 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:31.149 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:31.149 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:31.149 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:31.149 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.149 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.149 BaseBdev3 00:13:31.149 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.149 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:31.149 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:31.149 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:31.149 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:31.149 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:31.149 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:13:31.149 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:31.149 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.149 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.419 [ 00:13:31.419 { 00:13:31.419 "name": "BaseBdev3", 00:13:31.419 "aliases": [ 00:13:31.419 "8b7ba3e0-bfcb-4208-90b3-f772d4623bd9" 00:13:31.419 ], 00:13:31.419 "product_name": "Malloc disk", 00:13:31.419 "block_size": 512, 00:13:31.419 "num_blocks": 65536, 00:13:31.419 "uuid": "8b7ba3e0-bfcb-4208-90b3-f772d4623bd9", 00:13:31.419 "assigned_rate_limits": { 00:13:31.419 "rw_ios_per_sec": 0, 00:13:31.419 "rw_mbytes_per_sec": 0, 00:13:31.419 "r_mbytes_per_sec": 0, 00:13:31.419 "w_mbytes_per_sec": 0 00:13:31.419 }, 00:13:31.419 "claimed": false, 00:13:31.419 "zoned": false, 00:13:31.419 "supported_io_types": { 00:13:31.419 "read": true, 00:13:31.419 "write": true, 00:13:31.419 "unmap": true, 00:13:31.419 "flush": true, 00:13:31.419 "reset": true, 00:13:31.419 "nvme_admin": false, 00:13:31.419 "nvme_io": false, 00:13:31.419 "nvme_io_md": false, 00:13:31.419 "write_zeroes": true, 00:13:31.419 "zcopy": true, 00:13:31.419 "get_zone_info": false, 00:13:31.419 "zone_management": false, 00:13:31.419 "zone_append": false, 00:13:31.419 "compare": false, 00:13:31.419 "compare_and_write": false, 00:13:31.419 "abort": true, 00:13:31.419 "seek_hole": false, 00:13:31.419 "seek_data": false, 
00:13:31.419 "copy": true, 00:13:31.419 "nvme_iov_md": false 00:13:31.419 }, 00:13:31.419 "memory_domains": [ 00:13:31.419 { 00:13:31.419 "dma_device_id": "system", 00:13:31.419 "dma_device_type": 1 00:13:31.419 }, 00:13:31.419 { 00:13:31.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.419 "dma_device_type": 2 00:13:31.419 } 00:13:31.419 ], 00:13:31.419 "driver_specific": {} 00:13:31.419 } 00:13:31.419 ] 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.419 BaseBdev4 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:31.419 
14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.419 [ 00:13:31.419 { 00:13:31.419 "name": "BaseBdev4", 00:13:31.419 "aliases": [ 00:13:31.419 "dcbc2b4d-d82d-4c85-839c-87ed0d6eac82" 00:13:31.419 ], 00:13:31.419 "product_name": "Malloc disk", 00:13:31.419 "block_size": 512, 00:13:31.419 "num_blocks": 65536, 00:13:31.419 "uuid": "dcbc2b4d-d82d-4c85-839c-87ed0d6eac82", 00:13:31.419 "assigned_rate_limits": { 00:13:31.419 "rw_ios_per_sec": 0, 00:13:31.419 "rw_mbytes_per_sec": 0, 00:13:31.419 "r_mbytes_per_sec": 0, 00:13:31.419 "w_mbytes_per_sec": 0 00:13:31.419 }, 00:13:31.419 "claimed": false, 00:13:31.419 "zoned": false, 00:13:31.419 "supported_io_types": { 00:13:31.419 "read": true, 00:13:31.419 "write": true, 00:13:31.419 "unmap": true, 00:13:31.419 "flush": true, 00:13:31.419 "reset": true, 00:13:31.419 "nvme_admin": false, 00:13:31.419 "nvme_io": false, 00:13:31.419 "nvme_io_md": false, 00:13:31.419 "write_zeroes": true, 00:13:31.419 "zcopy": true, 00:13:31.419 "get_zone_info": false, 00:13:31.419 "zone_management": false, 00:13:31.419 "zone_append": false, 00:13:31.419 "compare": false, 00:13:31.419 "compare_and_write": false, 00:13:31.419 "abort": true, 00:13:31.419 "seek_hole": false, 00:13:31.419 "seek_data": false, 00:13:31.419 
"copy": true, 00:13:31.419 "nvme_iov_md": false 00:13:31.419 }, 00:13:31.419 "memory_domains": [ 00:13:31.419 { 00:13:31.419 "dma_device_id": "system", 00:13:31.419 "dma_device_type": 1 00:13:31.419 }, 00:13:31.419 { 00:13:31.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.419 "dma_device_type": 2 00:13:31.419 } 00:13:31.419 ], 00:13:31.419 "driver_specific": {} 00:13:31.419 } 00:13:31.419 ] 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.419 [2024-11-04 14:39:30.362744] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:31.419 [2024-11-04 14:39:30.362797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:31.419 [2024-11-04 14:39:30.362828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:31.419 [2024-11-04 14:39:30.365321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:31.419 [2024-11-04 14:39:30.365412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.419 14:39:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.419 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.420 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.420 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.420 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.420 "name": "Existed_Raid", 00:13:31.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.420 "strip_size_kb": 64, 00:13:31.420 "state": "configuring", 00:13:31.420 
"raid_level": "concat", 00:13:31.420 "superblock": false, 00:13:31.420 "num_base_bdevs": 4, 00:13:31.420 "num_base_bdevs_discovered": 3, 00:13:31.420 "num_base_bdevs_operational": 4, 00:13:31.420 "base_bdevs_list": [ 00:13:31.420 { 00:13:31.420 "name": "BaseBdev1", 00:13:31.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.420 "is_configured": false, 00:13:31.420 "data_offset": 0, 00:13:31.420 "data_size": 0 00:13:31.420 }, 00:13:31.420 { 00:13:31.420 "name": "BaseBdev2", 00:13:31.420 "uuid": "6d0e4179-5b06-44a2-bec3-14903518370b", 00:13:31.420 "is_configured": true, 00:13:31.420 "data_offset": 0, 00:13:31.420 "data_size": 65536 00:13:31.420 }, 00:13:31.420 { 00:13:31.420 "name": "BaseBdev3", 00:13:31.420 "uuid": "8b7ba3e0-bfcb-4208-90b3-f772d4623bd9", 00:13:31.420 "is_configured": true, 00:13:31.420 "data_offset": 0, 00:13:31.420 "data_size": 65536 00:13:31.420 }, 00:13:31.420 { 00:13:31.420 "name": "BaseBdev4", 00:13:31.420 "uuid": "dcbc2b4d-d82d-4c85-839c-87ed0d6eac82", 00:13:31.420 "is_configured": true, 00:13:31.420 "data_offset": 0, 00:13:31.420 "data_size": 65536 00:13:31.420 } 00:13:31.420 ] 00:13:31.420 }' 00:13:31.420 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.420 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.009 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:32.009 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.009 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.009 [2024-11-04 14:39:30.923004] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:32.009 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.009 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:32.009 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:32.009 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:32.009 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:32.009 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:32.009 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:32.009 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.009 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.009 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.009 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.009 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.009 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.009 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.009 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.009 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.009 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.009 "name": "Existed_Raid", 00:13:32.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.010 "strip_size_kb": 64, 00:13:32.010 "state": "configuring", 00:13:32.010 "raid_level": "concat", 00:13:32.010 "superblock": false, 
00:13:32.010 "num_base_bdevs": 4, 00:13:32.010 "num_base_bdevs_discovered": 2, 00:13:32.010 "num_base_bdevs_operational": 4, 00:13:32.010 "base_bdevs_list": [ 00:13:32.010 { 00:13:32.010 "name": "BaseBdev1", 00:13:32.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.010 "is_configured": false, 00:13:32.010 "data_offset": 0, 00:13:32.010 "data_size": 0 00:13:32.010 }, 00:13:32.010 { 00:13:32.010 "name": null, 00:13:32.010 "uuid": "6d0e4179-5b06-44a2-bec3-14903518370b", 00:13:32.010 "is_configured": false, 00:13:32.010 "data_offset": 0, 00:13:32.010 "data_size": 65536 00:13:32.010 }, 00:13:32.010 { 00:13:32.010 "name": "BaseBdev3", 00:13:32.010 "uuid": "8b7ba3e0-bfcb-4208-90b3-f772d4623bd9", 00:13:32.010 "is_configured": true, 00:13:32.010 "data_offset": 0, 00:13:32.010 "data_size": 65536 00:13:32.010 }, 00:13:32.010 { 00:13:32.010 "name": "BaseBdev4", 00:13:32.010 "uuid": "dcbc2b4d-d82d-4c85-839c-87ed0d6eac82", 00:13:32.010 "is_configured": true, 00:13:32.010 "data_offset": 0, 00:13:32.010 "data_size": 65536 00:13:32.010 } 00:13:32.010 ] 00:13:32.010 }' 00:13:32.010 14:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.010 14:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:32.596 14:39:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.596 [2024-11-04 14:39:31.551452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:32.596 BaseBdev1 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:32.596 [ 00:13:32.596 { 00:13:32.596 "name": "BaseBdev1", 00:13:32.596 "aliases": [ 00:13:32.596 "4b7fc163-29a0-42f7-85ca-b57ea82365f5" 00:13:32.596 ], 00:13:32.596 "product_name": "Malloc disk", 00:13:32.596 "block_size": 512, 00:13:32.596 "num_blocks": 65536, 00:13:32.596 "uuid": "4b7fc163-29a0-42f7-85ca-b57ea82365f5", 00:13:32.596 "assigned_rate_limits": { 00:13:32.596 "rw_ios_per_sec": 0, 00:13:32.596 "rw_mbytes_per_sec": 0, 00:13:32.596 "r_mbytes_per_sec": 0, 00:13:32.596 "w_mbytes_per_sec": 0 00:13:32.596 }, 00:13:32.596 "claimed": true, 00:13:32.596 "claim_type": "exclusive_write", 00:13:32.596 "zoned": false, 00:13:32.596 "supported_io_types": { 00:13:32.596 "read": true, 00:13:32.596 "write": true, 00:13:32.596 "unmap": true, 00:13:32.596 "flush": true, 00:13:32.596 "reset": true, 00:13:32.596 "nvme_admin": false, 00:13:32.596 "nvme_io": false, 00:13:32.596 "nvme_io_md": false, 00:13:32.596 "write_zeroes": true, 00:13:32.596 "zcopy": true, 00:13:32.596 "get_zone_info": false, 00:13:32.596 "zone_management": false, 00:13:32.596 "zone_append": false, 00:13:32.596 "compare": false, 00:13:32.596 "compare_and_write": false, 00:13:32.596 "abort": true, 00:13:32.596 "seek_hole": false, 00:13:32.596 "seek_data": false, 00:13:32.596 "copy": true, 00:13:32.596 "nvme_iov_md": false 00:13:32.596 }, 00:13:32.596 "memory_domains": [ 00:13:32.596 { 00:13:32.596 "dma_device_id": "system", 00:13:32.596 "dma_device_type": 1 00:13:32.596 }, 00:13:32.596 { 00:13:32.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.596 "dma_device_type": 2 00:13:32.596 } 00:13:32.596 ], 00:13:32.596 "driver_specific": {} 00:13:32.596 } 00:13:32.596 ] 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.596 "name": "Existed_Raid", 00:13:32.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.596 "strip_size_kb": 64, 00:13:32.596 "state": "configuring", 00:13:32.596 "raid_level": "concat", 00:13:32.596 "superblock": false, 
00:13:32.596 "num_base_bdevs": 4, 00:13:32.596 "num_base_bdevs_discovered": 3, 00:13:32.596 "num_base_bdevs_operational": 4, 00:13:32.596 "base_bdevs_list": [ 00:13:32.596 { 00:13:32.596 "name": "BaseBdev1", 00:13:32.596 "uuid": "4b7fc163-29a0-42f7-85ca-b57ea82365f5", 00:13:32.596 "is_configured": true, 00:13:32.596 "data_offset": 0, 00:13:32.596 "data_size": 65536 00:13:32.596 }, 00:13:32.596 { 00:13:32.596 "name": null, 00:13:32.596 "uuid": "6d0e4179-5b06-44a2-bec3-14903518370b", 00:13:32.596 "is_configured": false, 00:13:32.596 "data_offset": 0, 00:13:32.596 "data_size": 65536 00:13:32.596 }, 00:13:32.596 { 00:13:32.596 "name": "BaseBdev3", 00:13:32.596 "uuid": "8b7ba3e0-bfcb-4208-90b3-f772d4623bd9", 00:13:32.596 "is_configured": true, 00:13:32.596 "data_offset": 0, 00:13:32.596 "data_size": 65536 00:13:32.596 }, 00:13:32.596 { 00:13:32.596 "name": "BaseBdev4", 00:13:32.596 "uuid": "dcbc2b4d-d82d-4c85-839c-87ed0d6eac82", 00:13:32.596 "is_configured": true, 00:13:32.596 "data_offset": 0, 00:13:32.596 "data_size": 65536 00:13:32.596 } 00:13:32.596 ] 00:13:32.596 }' 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.596 14:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.166 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.166 14:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.166 14:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.166 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:33.166 14:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.166 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:33.166 14:39:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:33.166 14:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.166 14:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.166 [2024-11-04 14:39:32.151711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:33.166 14:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.166 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:33.166 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.166 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:33.166 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:33.166 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:33.166 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:33.166 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.166 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.166 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.166 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.166 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.166 14:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.166 14:39:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:33.166 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.166 14:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.166 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.166 "name": "Existed_Raid", 00:13:33.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.166 "strip_size_kb": 64, 00:13:33.166 "state": "configuring", 00:13:33.166 "raid_level": "concat", 00:13:33.166 "superblock": false, 00:13:33.166 "num_base_bdevs": 4, 00:13:33.166 "num_base_bdevs_discovered": 2, 00:13:33.166 "num_base_bdevs_operational": 4, 00:13:33.166 "base_bdevs_list": [ 00:13:33.166 { 00:13:33.166 "name": "BaseBdev1", 00:13:33.166 "uuid": "4b7fc163-29a0-42f7-85ca-b57ea82365f5", 00:13:33.166 "is_configured": true, 00:13:33.166 "data_offset": 0, 00:13:33.166 "data_size": 65536 00:13:33.166 }, 00:13:33.166 { 00:13:33.166 "name": null, 00:13:33.166 "uuid": "6d0e4179-5b06-44a2-bec3-14903518370b", 00:13:33.166 "is_configured": false, 00:13:33.166 "data_offset": 0, 00:13:33.166 "data_size": 65536 00:13:33.166 }, 00:13:33.166 { 00:13:33.166 "name": null, 00:13:33.166 "uuid": "8b7ba3e0-bfcb-4208-90b3-f772d4623bd9", 00:13:33.166 "is_configured": false, 00:13:33.166 "data_offset": 0, 00:13:33.166 "data_size": 65536 00:13:33.166 }, 00:13:33.166 { 00:13:33.166 "name": "BaseBdev4", 00:13:33.166 "uuid": "dcbc2b4d-d82d-4c85-839c-87ed0d6eac82", 00:13:33.166 "is_configured": true, 00:13:33.166 "data_offset": 0, 00:13:33.166 "data_size": 65536 00:13:33.166 } 00:13:33.166 ] 00:13:33.166 }' 00:13:33.166 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.166 14:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.733 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:33.733 14:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.733 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:33.733 14:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.733 14:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.733 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:33.733 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:33.733 14:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.733 14:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.733 [2024-11-04 14:39:32.735809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:33.733 14:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.733 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:33.733 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.733 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:33.733 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:33.733 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:33.733 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:33.733 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:13:33.733 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.733 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.733 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.733 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.733 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.733 14:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.733 14:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.733 14:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.733 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.733 "name": "Existed_Raid", 00:13:33.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.733 "strip_size_kb": 64, 00:13:33.733 "state": "configuring", 00:13:33.733 "raid_level": "concat", 00:13:33.733 "superblock": false, 00:13:33.733 "num_base_bdevs": 4, 00:13:33.733 "num_base_bdevs_discovered": 3, 00:13:33.733 "num_base_bdevs_operational": 4, 00:13:33.733 "base_bdevs_list": [ 00:13:33.733 { 00:13:33.733 "name": "BaseBdev1", 00:13:33.733 "uuid": "4b7fc163-29a0-42f7-85ca-b57ea82365f5", 00:13:33.733 "is_configured": true, 00:13:33.733 "data_offset": 0, 00:13:33.733 "data_size": 65536 00:13:33.733 }, 00:13:33.733 { 00:13:33.733 "name": null, 00:13:33.733 "uuid": "6d0e4179-5b06-44a2-bec3-14903518370b", 00:13:33.733 "is_configured": false, 00:13:33.733 "data_offset": 0, 00:13:33.733 "data_size": 65536 00:13:33.733 }, 00:13:33.733 { 00:13:33.733 "name": "BaseBdev3", 00:13:33.733 "uuid": "8b7ba3e0-bfcb-4208-90b3-f772d4623bd9", 00:13:33.733 
"is_configured": true, 00:13:33.733 "data_offset": 0, 00:13:33.733 "data_size": 65536 00:13:33.733 }, 00:13:33.733 { 00:13:33.733 "name": "BaseBdev4", 00:13:33.733 "uuid": "dcbc2b4d-d82d-4c85-839c-87ed0d6eac82", 00:13:33.733 "is_configured": true, 00:13:33.733 "data_offset": 0, 00:13:33.733 "data_size": 65536 00:13:33.733 } 00:13:33.733 ] 00:13:33.733 }' 00:13:33.733 14:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.733 14:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.301 14:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.301 14:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.301 14:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.301 14:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:34.301 14:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.301 14:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:34.301 14:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:34.301 14:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.301 14:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.301 [2024-11-04 14:39:33.324079] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:34.301 14:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.301 14:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:34.301 14:39:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.301 14:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.301 14:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:34.301 14:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:34.301 14:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:34.301 14:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.301 14:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.301 14:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.301 14:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.301 14:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.301 14:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.301 14:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.301 14:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.561 14:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.561 14:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.561 "name": "Existed_Raid", 00:13:34.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.561 "strip_size_kb": 64, 00:13:34.561 "state": "configuring", 00:13:34.561 "raid_level": "concat", 00:13:34.561 "superblock": false, 00:13:34.561 "num_base_bdevs": 4, 00:13:34.561 "num_base_bdevs_discovered": 2, 00:13:34.561 "num_base_bdevs_operational": 4, 
00:13:34.561 "base_bdevs_list": [ 00:13:34.561 { 00:13:34.561 "name": null, 00:13:34.561 "uuid": "4b7fc163-29a0-42f7-85ca-b57ea82365f5", 00:13:34.561 "is_configured": false, 00:13:34.561 "data_offset": 0, 00:13:34.561 "data_size": 65536 00:13:34.561 }, 00:13:34.561 { 00:13:34.561 "name": null, 00:13:34.561 "uuid": "6d0e4179-5b06-44a2-bec3-14903518370b", 00:13:34.561 "is_configured": false, 00:13:34.561 "data_offset": 0, 00:13:34.561 "data_size": 65536 00:13:34.561 }, 00:13:34.561 { 00:13:34.561 "name": "BaseBdev3", 00:13:34.561 "uuid": "8b7ba3e0-bfcb-4208-90b3-f772d4623bd9", 00:13:34.561 "is_configured": true, 00:13:34.561 "data_offset": 0, 00:13:34.561 "data_size": 65536 00:13:34.561 }, 00:13:34.561 { 00:13:34.561 "name": "BaseBdev4", 00:13:34.561 "uuid": "dcbc2b4d-d82d-4c85-839c-87ed0d6eac82", 00:13:34.561 "is_configured": true, 00:13:34.561 "data_offset": 0, 00:13:34.561 "data_size": 65536 00:13:34.561 } 00:13:34.561 ] 00:13:34.561 }' 00:13:34.561 14:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.561 14:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.828 14:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.828 14:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.828 14:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.828 14:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:34.828 14:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.088 14:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:35.088 14:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:35.088 14:39:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.088 14:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.088 [2024-11-04 14:39:33.964482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:35.088 14:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.088 14:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:35.088 14:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.088 14:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:35.088 14:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:35.088 14:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:35.088 14:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:35.088 14:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.088 14:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.088 14:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.088 14:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.088 14:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.088 14:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.088 14:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.088 14:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.088 14:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.088 14:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.088 "name": "Existed_Raid", 00:13:35.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.088 "strip_size_kb": 64, 00:13:35.088 "state": "configuring", 00:13:35.088 "raid_level": "concat", 00:13:35.088 "superblock": false, 00:13:35.088 "num_base_bdevs": 4, 00:13:35.088 "num_base_bdevs_discovered": 3, 00:13:35.088 "num_base_bdevs_operational": 4, 00:13:35.088 "base_bdevs_list": [ 00:13:35.088 { 00:13:35.088 "name": null, 00:13:35.088 "uuid": "4b7fc163-29a0-42f7-85ca-b57ea82365f5", 00:13:35.088 "is_configured": false, 00:13:35.088 "data_offset": 0, 00:13:35.088 "data_size": 65536 00:13:35.088 }, 00:13:35.088 { 00:13:35.088 "name": "BaseBdev2", 00:13:35.088 "uuid": "6d0e4179-5b06-44a2-bec3-14903518370b", 00:13:35.088 "is_configured": true, 00:13:35.088 "data_offset": 0, 00:13:35.088 "data_size": 65536 00:13:35.088 }, 00:13:35.088 { 00:13:35.088 "name": "BaseBdev3", 00:13:35.088 "uuid": "8b7ba3e0-bfcb-4208-90b3-f772d4623bd9", 00:13:35.088 "is_configured": true, 00:13:35.088 "data_offset": 0, 00:13:35.088 "data_size": 65536 00:13:35.088 }, 00:13:35.088 { 00:13:35.088 "name": "BaseBdev4", 00:13:35.088 "uuid": "dcbc2b4d-d82d-4c85-839c-87ed0d6eac82", 00:13:35.088 "is_configured": true, 00:13:35.088 "data_offset": 0, 00:13:35.088 "data_size": 65536 00:13:35.088 } 00:13:35.088 ] 00:13:35.088 }' 00:13:35.088 14:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.088 14:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.655 14:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.655 14:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:35.655 14:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.655 14:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:35.655 14:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.655 14:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:35.655 14:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.655 14:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.655 14:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.655 14:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:35.655 14:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.655 14:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4b7fc163-29a0-42f7-85ca-b57ea82365f5 00:13:35.655 14:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.655 14:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.655 [2024-11-04 14:39:34.643621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:35.655 [2024-11-04 14:39:34.643716] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:35.655 [2024-11-04 14:39:34.643729] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:35.655 [2024-11-04 14:39:34.644118] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:35.655 [2024-11-04 14:39:34.644315] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:35.655 [2024-11-04 14:39:34.644349] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:35.655 [2024-11-04 14:39:34.644661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.655 NewBaseBdev 00:13:35.655 14:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.655 14:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:35.655 14:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:13:35.655 14:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:35.655 14:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:35.655 14:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:35.655 14:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:35.655 14:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:35.655 14:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.655 14:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.655 14:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.655 14:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:35.655 14:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.655 14:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.655 [ 00:13:35.655 { 
00:13:35.655 "name": "NewBaseBdev", 00:13:35.655 "aliases": [ 00:13:35.655 "4b7fc163-29a0-42f7-85ca-b57ea82365f5" 00:13:35.655 ], 00:13:35.655 "product_name": "Malloc disk", 00:13:35.655 "block_size": 512, 00:13:35.655 "num_blocks": 65536, 00:13:35.655 "uuid": "4b7fc163-29a0-42f7-85ca-b57ea82365f5", 00:13:35.655 "assigned_rate_limits": { 00:13:35.655 "rw_ios_per_sec": 0, 00:13:35.655 "rw_mbytes_per_sec": 0, 00:13:35.655 "r_mbytes_per_sec": 0, 00:13:35.655 "w_mbytes_per_sec": 0 00:13:35.655 }, 00:13:35.655 "claimed": true, 00:13:35.655 "claim_type": "exclusive_write", 00:13:35.655 "zoned": false, 00:13:35.656 "supported_io_types": { 00:13:35.656 "read": true, 00:13:35.656 "write": true, 00:13:35.656 "unmap": true, 00:13:35.656 "flush": true, 00:13:35.656 "reset": true, 00:13:35.656 "nvme_admin": false, 00:13:35.656 "nvme_io": false, 00:13:35.656 "nvme_io_md": false, 00:13:35.656 "write_zeroes": true, 00:13:35.656 "zcopy": true, 00:13:35.656 "get_zone_info": false, 00:13:35.656 "zone_management": false, 00:13:35.656 "zone_append": false, 00:13:35.656 "compare": false, 00:13:35.656 "compare_and_write": false, 00:13:35.656 "abort": true, 00:13:35.656 "seek_hole": false, 00:13:35.656 "seek_data": false, 00:13:35.656 "copy": true, 00:13:35.656 "nvme_iov_md": false 00:13:35.656 }, 00:13:35.656 "memory_domains": [ 00:13:35.656 { 00:13:35.656 "dma_device_id": "system", 00:13:35.656 "dma_device_type": 1 00:13:35.656 }, 00:13:35.656 { 00:13:35.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.656 "dma_device_type": 2 00:13:35.656 } 00:13:35.656 ], 00:13:35.656 "driver_specific": {} 00:13:35.656 } 00:13:35.656 ] 00:13:35.656 14:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.656 14:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:35.656 14:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:13:35.656 
14:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.656 14:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.656 14:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:35.656 14:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:35.656 14:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:35.656 14:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.656 14:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.656 14:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.656 14:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.656 14:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.656 14:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.656 14:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.656 14:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.656 14:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.656 14:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.656 "name": "Existed_Raid", 00:13:35.656 "uuid": "fcfbe12b-68d5-4311-97bf-f39489209f10", 00:13:35.656 "strip_size_kb": 64, 00:13:35.656 "state": "online", 00:13:35.656 "raid_level": "concat", 00:13:35.656 "superblock": false, 00:13:35.656 "num_base_bdevs": 4, 00:13:35.656 "num_base_bdevs_discovered": 4, 00:13:35.656 
"num_base_bdevs_operational": 4, 00:13:35.656 "base_bdevs_list": [ 00:13:35.656 { 00:13:35.656 "name": "NewBaseBdev", 00:13:35.656 "uuid": "4b7fc163-29a0-42f7-85ca-b57ea82365f5", 00:13:35.656 "is_configured": true, 00:13:35.656 "data_offset": 0, 00:13:35.656 "data_size": 65536 00:13:35.656 }, 00:13:35.656 { 00:13:35.656 "name": "BaseBdev2", 00:13:35.656 "uuid": "6d0e4179-5b06-44a2-bec3-14903518370b", 00:13:35.656 "is_configured": true, 00:13:35.656 "data_offset": 0, 00:13:35.656 "data_size": 65536 00:13:35.656 }, 00:13:35.656 { 00:13:35.656 "name": "BaseBdev3", 00:13:35.656 "uuid": "8b7ba3e0-bfcb-4208-90b3-f772d4623bd9", 00:13:35.656 "is_configured": true, 00:13:35.656 "data_offset": 0, 00:13:35.656 "data_size": 65536 00:13:35.656 }, 00:13:35.656 { 00:13:35.656 "name": "BaseBdev4", 00:13:35.656 "uuid": "dcbc2b4d-d82d-4c85-839c-87ed0d6eac82", 00:13:35.656 "is_configured": true, 00:13:35.656 "data_offset": 0, 00:13:35.656 "data_size": 65536 00:13:35.656 } 00:13:35.656 ] 00:13:35.656 }' 00:13:35.656 14:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.656 14:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.223 14:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:36.223 14:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:36.223 14:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:36.223 14:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:36.223 14:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:36.223 14:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:36.223 14:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:36.223 
14:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:36.223 14:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.223 14:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.223 [2024-11-04 14:39:35.196281] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:36.223 14:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.223 14:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:36.223 "name": "Existed_Raid", 00:13:36.223 "aliases": [ 00:13:36.223 "fcfbe12b-68d5-4311-97bf-f39489209f10" 00:13:36.223 ], 00:13:36.223 "product_name": "Raid Volume", 00:13:36.223 "block_size": 512, 00:13:36.223 "num_blocks": 262144, 00:13:36.223 "uuid": "fcfbe12b-68d5-4311-97bf-f39489209f10", 00:13:36.223 "assigned_rate_limits": { 00:13:36.223 "rw_ios_per_sec": 0, 00:13:36.223 "rw_mbytes_per_sec": 0, 00:13:36.223 "r_mbytes_per_sec": 0, 00:13:36.223 "w_mbytes_per_sec": 0 00:13:36.223 }, 00:13:36.223 "claimed": false, 00:13:36.223 "zoned": false, 00:13:36.223 "supported_io_types": { 00:13:36.223 "read": true, 00:13:36.223 "write": true, 00:13:36.223 "unmap": true, 00:13:36.223 "flush": true, 00:13:36.223 "reset": true, 00:13:36.223 "nvme_admin": false, 00:13:36.223 "nvme_io": false, 00:13:36.223 "nvme_io_md": false, 00:13:36.223 "write_zeroes": true, 00:13:36.223 "zcopy": false, 00:13:36.223 "get_zone_info": false, 00:13:36.223 "zone_management": false, 00:13:36.223 "zone_append": false, 00:13:36.223 "compare": false, 00:13:36.223 "compare_and_write": false, 00:13:36.223 "abort": false, 00:13:36.223 "seek_hole": false, 00:13:36.223 "seek_data": false, 00:13:36.223 "copy": false, 00:13:36.223 "nvme_iov_md": false 00:13:36.223 }, 00:13:36.223 "memory_domains": [ 00:13:36.223 { 00:13:36.223 "dma_device_id": 
"system", 00:13:36.223 "dma_device_type": 1 00:13:36.223 }, 00:13:36.223 { 00:13:36.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.223 "dma_device_type": 2 00:13:36.223 }, 00:13:36.223 { 00:13:36.223 "dma_device_id": "system", 00:13:36.223 "dma_device_type": 1 00:13:36.223 }, 00:13:36.223 { 00:13:36.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.223 "dma_device_type": 2 00:13:36.223 }, 00:13:36.223 { 00:13:36.223 "dma_device_id": "system", 00:13:36.223 "dma_device_type": 1 00:13:36.223 }, 00:13:36.223 { 00:13:36.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.223 "dma_device_type": 2 00:13:36.223 }, 00:13:36.223 { 00:13:36.223 "dma_device_id": "system", 00:13:36.223 "dma_device_type": 1 00:13:36.223 }, 00:13:36.223 { 00:13:36.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.223 "dma_device_type": 2 00:13:36.223 } 00:13:36.223 ], 00:13:36.223 "driver_specific": { 00:13:36.223 "raid": { 00:13:36.223 "uuid": "fcfbe12b-68d5-4311-97bf-f39489209f10", 00:13:36.223 "strip_size_kb": 64, 00:13:36.223 "state": "online", 00:13:36.223 "raid_level": "concat", 00:13:36.223 "superblock": false, 00:13:36.223 "num_base_bdevs": 4, 00:13:36.223 "num_base_bdevs_discovered": 4, 00:13:36.223 "num_base_bdevs_operational": 4, 00:13:36.223 "base_bdevs_list": [ 00:13:36.223 { 00:13:36.223 "name": "NewBaseBdev", 00:13:36.223 "uuid": "4b7fc163-29a0-42f7-85ca-b57ea82365f5", 00:13:36.223 "is_configured": true, 00:13:36.224 "data_offset": 0, 00:13:36.224 "data_size": 65536 00:13:36.224 }, 00:13:36.224 { 00:13:36.224 "name": "BaseBdev2", 00:13:36.224 "uuid": "6d0e4179-5b06-44a2-bec3-14903518370b", 00:13:36.224 "is_configured": true, 00:13:36.224 "data_offset": 0, 00:13:36.224 "data_size": 65536 00:13:36.224 }, 00:13:36.224 { 00:13:36.224 "name": "BaseBdev3", 00:13:36.224 "uuid": "8b7ba3e0-bfcb-4208-90b3-f772d4623bd9", 00:13:36.224 "is_configured": true, 00:13:36.224 "data_offset": 0, 00:13:36.224 "data_size": 65536 00:13:36.224 }, 00:13:36.224 { 00:13:36.224 "name": 
"BaseBdev4", 00:13:36.224 "uuid": "dcbc2b4d-d82d-4c85-839c-87ed0d6eac82", 00:13:36.224 "is_configured": true, 00:13:36.224 "data_offset": 0, 00:13:36.224 "data_size": 65536 00:13:36.224 } 00:13:36.224 ] 00:13:36.224 } 00:13:36.224 } 00:13:36.224 }' 00:13:36.224 14:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:36.224 14:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:36.224 BaseBdev2 00:13:36.224 BaseBdev3 00:13:36.224 BaseBdev4' 00:13:36.224 14:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.224 14:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:36.224 14:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:36.483 14:39:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.483 [2024-11-04 14:39:35.575904] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:36.483 [2024-11-04 14:39:35.575956] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:36.483 [2024-11-04 14:39:35.576055] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:36.483 [2024-11-04 14:39:35.576153] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:36.483 [2024-11-04 14:39:35.576171] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71382 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 
-- # '[' -z 71382 ']' 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 71382 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:36.483 14:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71382 00:13:36.742 14:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:36.742 14:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:36.742 14:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71382' 00:13:36.742 killing process with pid 71382 00:13:36.742 14:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 71382 00:13:36.742 [2024-11-04 14:39:35.611567] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:36.742 14:39:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 71382 00:13:37.001 [2024-11-04 14:39:35.953399] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:37.935 14:39:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:37.935 00:13:37.936 real 0m12.835s 00:13:37.936 user 0m21.345s 00:13:37.936 sys 0m1.766s 00:13:37.936 14:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:37.936 14:39:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.936 ************************************ 00:13:37.936 END TEST raid_state_function_test 00:13:37.936 ************************************ 00:13:37.936 14:39:37 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 
00:13:37.936 14:39:37 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:37.936 14:39:37 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:37.936 14:39:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:37.936 ************************************ 00:13:37.936 START TEST raid_state_function_test_sb 00:13:37.936 ************************************ 00:13:37.936 14:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 true 00:13:37.936 14:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:13:37.936 14:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:37.936 14:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:37.936 14:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:37.936 14:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:37.936 14:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:37.936 14:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:37.936 14:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:37.936 14:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:37.936 14:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:37.936 14:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:37.936 14:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:38.194 14:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:38.194 14:39:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:38.194 14:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:38.194 14:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:38.194 14:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:38.194 14:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:38.194 14:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:38.194 14:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:38.194 14:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:38.194 14:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:38.194 14:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:38.194 14:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:38.194 14:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:13:38.194 14:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:38.194 14:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:38.194 14:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:38.194 14:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:38.194 14:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72064 00:13:38.194 Process raid pid: 72064 00:13:38.194 14:39:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72064' 00:13:38.194 14:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72064 00:13:38.194 14:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:38.194 14:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 72064 ']' 00:13:38.194 14:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.194 14:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:38.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.194 14:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.194 14:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:38.194 14:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.194 [2024-11-04 14:39:37.166805] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:13:38.194 [2024-11-04 14:39:37.167042] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:38.453 [2024-11-04 14:39:37.358188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.453 [2024-11-04 14:39:37.509799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.713 [2024-11-04 14:39:37.705955] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.713 [2024-11-04 14:39:37.706028] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:39.281 14:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:39.281 14:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:13:39.281 14:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:39.281 14:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.281 14:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.281 [2024-11-04 14:39:38.188404] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:39.281 [2024-11-04 14:39:38.188473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:39.281 [2024-11-04 14:39:38.188489] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:39.281 [2024-11-04 14:39:38.188504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:39.281 [2024-11-04 14:39:38.188514] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:13:39.281 [2024-11-04 14:39:38.188528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:39.281 [2024-11-04 14:39:38.188537] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:39.281 [2024-11-04 14:39:38.188550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:39.281 14:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.281 14:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:39.281 14:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:39.281 14:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:39.281 14:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:39.281 14:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:39.281 14:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:39.281 14:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.281 14:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.281 14:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.281 14:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.281 14:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.281 14:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.281 14:39:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.281 14:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.281 14:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.281 14:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.281 "name": "Existed_Raid", 00:13:39.281 "uuid": "a045992b-d109-43ac-ab1b-b9ebd4bc7f37", 00:13:39.281 "strip_size_kb": 64, 00:13:39.281 "state": "configuring", 00:13:39.281 "raid_level": "concat", 00:13:39.281 "superblock": true, 00:13:39.281 "num_base_bdevs": 4, 00:13:39.281 "num_base_bdevs_discovered": 0, 00:13:39.281 "num_base_bdevs_operational": 4, 00:13:39.281 "base_bdevs_list": [ 00:13:39.281 { 00:13:39.281 "name": "BaseBdev1", 00:13:39.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.281 "is_configured": false, 00:13:39.281 "data_offset": 0, 00:13:39.281 "data_size": 0 00:13:39.281 }, 00:13:39.281 { 00:13:39.281 "name": "BaseBdev2", 00:13:39.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.281 "is_configured": false, 00:13:39.281 "data_offset": 0, 00:13:39.281 "data_size": 0 00:13:39.281 }, 00:13:39.281 { 00:13:39.281 "name": "BaseBdev3", 00:13:39.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.281 "is_configured": false, 00:13:39.281 "data_offset": 0, 00:13:39.281 "data_size": 0 00:13:39.281 }, 00:13:39.281 { 00:13:39.281 "name": "BaseBdev4", 00:13:39.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.281 "is_configured": false, 00:13:39.281 "data_offset": 0, 00:13:39.281 "data_size": 0 00:13:39.281 } 00:13:39.281 ] 00:13:39.281 }' 00:13:39.281 14:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.281 14:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.849 14:39:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:39.849 14:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.849 14:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.849 [2024-11-04 14:39:38.708429] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:39.849 [2024-11-04 14:39:38.708499] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:39.849 14:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.849 14:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:39.849 14:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.849 14:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.849 [2024-11-04 14:39:38.716403] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:39.849 [2024-11-04 14:39:38.716460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:39.849 [2024-11-04 14:39:38.716473] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:39.849 [2024-11-04 14:39:38.716488] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:39.849 [2024-11-04 14:39:38.716496] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:39.849 [2024-11-04 14:39:38.716515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:39.849 [2024-11-04 14:39:38.716524] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:13:39.849 [2024-11-04 14:39:38.716537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:39.849 14:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.849 14:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:39.849 14:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.849 14:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.849 [2024-11-04 14:39:38.759369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:39.849 BaseBdev1 00:13:39.849 14:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.849 14:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:39.849 14:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:39.849 14:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:39.849 14:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:39.849 14:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:39.849 14:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:39.849 14:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:39.849 14:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.849 14:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.849 14:39:38 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.849 14:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:39.849 14:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.849 14:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.849 [ 00:13:39.849 { 00:13:39.849 "name": "BaseBdev1", 00:13:39.849 "aliases": [ 00:13:39.849 "5163769b-4d90-4956-920c-fe75189f87f0" 00:13:39.849 ], 00:13:39.849 "product_name": "Malloc disk", 00:13:39.849 "block_size": 512, 00:13:39.849 "num_blocks": 65536, 00:13:39.849 "uuid": "5163769b-4d90-4956-920c-fe75189f87f0", 00:13:39.849 "assigned_rate_limits": { 00:13:39.849 "rw_ios_per_sec": 0, 00:13:39.849 "rw_mbytes_per_sec": 0, 00:13:39.849 "r_mbytes_per_sec": 0, 00:13:39.849 "w_mbytes_per_sec": 0 00:13:39.849 }, 00:13:39.849 "claimed": true, 00:13:39.849 "claim_type": "exclusive_write", 00:13:39.850 "zoned": false, 00:13:39.850 "supported_io_types": { 00:13:39.850 "read": true, 00:13:39.850 "write": true, 00:13:39.850 "unmap": true, 00:13:39.850 "flush": true, 00:13:39.850 "reset": true, 00:13:39.850 "nvme_admin": false, 00:13:39.850 "nvme_io": false, 00:13:39.850 "nvme_io_md": false, 00:13:39.850 "write_zeroes": true, 00:13:39.850 "zcopy": true, 00:13:39.850 "get_zone_info": false, 00:13:39.850 "zone_management": false, 00:13:39.850 "zone_append": false, 00:13:39.850 "compare": false, 00:13:39.850 "compare_and_write": false, 00:13:39.850 "abort": true, 00:13:39.850 "seek_hole": false, 00:13:39.850 "seek_data": false, 00:13:39.850 "copy": true, 00:13:39.850 "nvme_iov_md": false 00:13:39.850 }, 00:13:39.850 "memory_domains": [ 00:13:39.850 { 00:13:39.850 "dma_device_id": "system", 00:13:39.850 "dma_device_type": 1 00:13:39.850 }, 00:13:39.850 { 00:13:39.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.850 "dma_device_type": 2 00:13:39.850 } 
00:13:39.850 ], 00:13:39.850 "driver_specific": {} 00:13:39.850 } 00:13:39.850 ] 00:13:39.850 14:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.850 14:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:39.850 14:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:39.850 14:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:39.850 14:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:39.850 14:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:39.850 14:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:39.850 14:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:39.850 14:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.850 14:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.850 14:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.850 14:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.850 14:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.850 14:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.850 14:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.850 14:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.850 14:39:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.850 14:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.850 "name": "Existed_Raid", 00:13:39.850 "uuid": "4edc6c08-2bee-4ed1-a338-bf6cc32678ce", 00:13:39.850 "strip_size_kb": 64, 00:13:39.850 "state": "configuring", 00:13:39.850 "raid_level": "concat", 00:13:39.850 "superblock": true, 00:13:39.850 "num_base_bdevs": 4, 00:13:39.850 "num_base_bdevs_discovered": 1, 00:13:39.850 "num_base_bdevs_operational": 4, 00:13:39.850 "base_bdevs_list": [ 00:13:39.850 { 00:13:39.850 "name": "BaseBdev1", 00:13:39.850 "uuid": "5163769b-4d90-4956-920c-fe75189f87f0", 00:13:39.850 "is_configured": true, 00:13:39.850 "data_offset": 2048, 00:13:39.850 "data_size": 63488 00:13:39.850 }, 00:13:39.850 { 00:13:39.850 "name": "BaseBdev2", 00:13:39.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.850 "is_configured": false, 00:13:39.850 "data_offset": 0, 00:13:39.850 "data_size": 0 00:13:39.850 }, 00:13:39.850 { 00:13:39.850 "name": "BaseBdev3", 00:13:39.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.850 "is_configured": false, 00:13:39.850 "data_offset": 0, 00:13:39.850 "data_size": 0 00:13:39.850 }, 00:13:39.850 { 00:13:39.850 "name": "BaseBdev4", 00:13:39.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.850 "is_configured": false, 00:13:39.850 "data_offset": 0, 00:13:39.850 "data_size": 0 00:13:39.850 } 00:13:39.850 ] 00:13:39.850 }' 00:13:39.850 14:39:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.850 14:39:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.417 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:40.417 14:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.417 14:39:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.417 [2024-11-04 14:39:39.299594] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:40.417 [2024-11-04 14:39:39.299675] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:40.417 14:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.417 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:40.417 14:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.417 14:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.417 [2024-11-04 14:39:39.307647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:40.417 [2024-11-04 14:39:39.310107] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:40.417 [2024-11-04 14:39:39.310157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:40.417 [2024-11-04 14:39:39.310171] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:40.417 [2024-11-04 14:39:39.310188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:40.417 [2024-11-04 14:39:39.310198] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:40.417 [2024-11-04 14:39:39.310212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:40.417 14:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.417 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:13:40.417 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:40.417 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:40.417 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.417 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.417 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:40.417 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.417 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:40.418 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.418 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.418 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.418 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.418 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.418 14:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.418 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.418 14:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.418 14:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.418 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:40.418 "name": "Existed_Raid", 00:13:40.418 "uuid": "1f12a3a1-728b-4862-a269-269b997aa2cd", 00:13:40.418 "strip_size_kb": 64, 00:13:40.418 "state": "configuring", 00:13:40.418 "raid_level": "concat", 00:13:40.418 "superblock": true, 00:13:40.418 "num_base_bdevs": 4, 00:13:40.418 "num_base_bdevs_discovered": 1, 00:13:40.418 "num_base_bdevs_operational": 4, 00:13:40.418 "base_bdevs_list": [ 00:13:40.418 { 00:13:40.418 "name": "BaseBdev1", 00:13:40.418 "uuid": "5163769b-4d90-4956-920c-fe75189f87f0", 00:13:40.418 "is_configured": true, 00:13:40.418 "data_offset": 2048, 00:13:40.418 "data_size": 63488 00:13:40.418 }, 00:13:40.418 { 00:13:40.418 "name": "BaseBdev2", 00:13:40.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.418 "is_configured": false, 00:13:40.418 "data_offset": 0, 00:13:40.418 "data_size": 0 00:13:40.418 }, 00:13:40.418 { 00:13:40.418 "name": "BaseBdev3", 00:13:40.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.418 "is_configured": false, 00:13:40.418 "data_offset": 0, 00:13:40.418 "data_size": 0 00:13:40.418 }, 00:13:40.418 { 00:13:40.418 "name": "BaseBdev4", 00:13:40.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.418 "is_configured": false, 00:13:40.418 "data_offset": 0, 00:13:40.418 "data_size": 0 00:13:40.418 } 00:13:40.418 ] 00:13:40.418 }' 00:13:40.418 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.418 14:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.019 [2024-11-04 14:39:39.850498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:13:41.019 BaseBdev2 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.019 [ 00:13:41.019 { 00:13:41.019 "name": "BaseBdev2", 00:13:41.019 "aliases": [ 00:13:41.019 "f27a96c2-8db6-4963-a4d5-49bb64e9a9d0" 00:13:41.019 ], 00:13:41.019 "product_name": "Malloc disk", 00:13:41.019 "block_size": 512, 00:13:41.019 "num_blocks": 65536, 00:13:41.019 "uuid": "f27a96c2-8db6-4963-a4d5-49bb64e9a9d0", 
00:13:41.019 "assigned_rate_limits": { 00:13:41.019 "rw_ios_per_sec": 0, 00:13:41.019 "rw_mbytes_per_sec": 0, 00:13:41.019 "r_mbytes_per_sec": 0, 00:13:41.019 "w_mbytes_per_sec": 0 00:13:41.019 }, 00:13:41.019 "claimed": true, 00:13:41.019 "claim_type": "exclusive_write", 00:13:41.019 "zoned": false, 00:13:41.019 "supported_io_types": { 00:13:41.019 "read": true, 00:13:41.019 "write": true, 00:13:41.019 "unmap": true, 00:13:41.019 "flush": true, 00:13:41.019 "reset": true, 00:13:41.019 "nvme_admin": false, 00:13:41.019 "nvme_io": false, 00:13:41.019 "nvme_io_md": false, 00:13:41.019 "write_zeroes": true, 00:13:41.019 "zcopy": true, 00:13:41.019 "get_zone_info": false, 00:13:41.019 "zone_management": false, 00:13:41.019 "zone_append": false, 00:13:41.019 "compare": false, 00:13:41.019 "compare_and_write": false, 00:13:41.019 "abort": true, 00:13:41.019 "seek_hole": false, 00:13:41.019 "seek_data": false, 00:13:41.019 "copy": true, 00:13:41.019 "nvme_iov_md": false 00:13:41.019 }, 00:13:41.019 "memory_domains": [ 00:13:41.019 { 00:13:41.019 "dma_device_id": "system", 00:13:41.019 "dma_device_type": 1 00:13:41.019 }, 00:13:41.019 { 00:13:41.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.019 "dma_device_type": 2 00:13:41.019 } 00:13:41.019 ], 00:13:41.019 "driver_specific": {} 00:13:41.019 } 00:13:41.019 ] 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.019 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.019 "name": "Existed_Raid", 00:13:41.019 "uuid": "1f12a3a1-728b-4862-a269-269b997aa2cd", 00:13:41.019 "strip_size_kb": 64, 00:13:41.019 "state": "configuring", 00:13:41.019 "raid_level": "concat", 00:13:41.019 "superblock": true, 00:13:41.019 "num_base_bdevs": 4, 00:13:41.019 "num_base_bdevs_discovered": 2, 00:13:41.019 
"num_base_bdevs_operational": 4, 00:13:41.019 "base_bdevs_list": [ 00:13:41.019 { 00:13:41.019 "name": "BaseBdev1", 00:13:41.019 "uuid": "5163769b-4d90-4956-920c-fe75189f87f0", 00:13:41.019 "is_configured": true, 00:13:41.019 "data_offset": 2048, 00:13:41.019 "data_size": 63488 00:13:41.019 }, 00:13:41.019 { 00:13:41.019 "name": "BaseBdev2", 00:13:41.019 "uuid": "f27a96c2-8db6-4963-a4d5-49bb64e9a9d0", 00:13:41.019 "is_configured": true, 00:13:41.019 "data_offset": 2048, 00:13:41.019 "data_size": 63488 00:13:41.019 }, 00:13:41.019 { 00:13:41.019 "name": "BaseBdev3", 00:13:41.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.019 "is_configured": false, 00:13:41.019 "data_offset": 0, 00:13:41.019 "data_size": 0 00:13:41.019 }, 00:13:41.019 { 00:13:41.019 "name": "BaseBdev4", 00:13:41.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.020 "is_configured": false, 00:13:41.020 "data_offset": 0, 00:13:41.020 "data_size": 0 00:13:41.020 } 00:13:41.020 ] 00:13:41.020 }' 00:13:41.020 14:39:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.020 14:39:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.279 14:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:41.279 14:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.279 14:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.538 [2024-11-04 14:39:40.431429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:41.538 BaseBdev3 00:13:41.538 14:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.538 14:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:41.538 14:39:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:41.538 14:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:41.538 14:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:41.538 14:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:41.538 14:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:41.538 14:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:41.538 14:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.538 14:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.538 14:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.538 14:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:41.538 14:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.538 14:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.538 [ 00:13:41.538 { 00:13:41.538 "name": "BaseBdev3", 00:13:41.538 "aliases": [ 00:13:41.538 "4de974af-0d2a-47b1-b40e-542061d26ec7" 00:13:41.538 ], 00:13:41.538 "product_name": "Malloc disk", 00:13:41.538 "block_size": 512, 00:13:41.538 "num_blocks": 65536, 00:13:41.538 "uuid": "4de974af-0d2a-47b1-b40e-542061d26ec7", 00:13:41.538 "assigned_rate_limits": { 00:13:41.538 "rw_ios_per_sec": 0, 00:13:41.538 "rw_mbytes_per_sec": 0, 00:13:41.538 "r_mbytes_per_sec": 0, 00:13:41.538 "w_mbytes_per_sec": 0 00:13:41.538 }, 00:13:41.538 "claimed": true, 00:13:41.538 "claim_type": "exclusive_write", 00:13:41.538 "zoned": false, 00:13:41.538 "supported_io_types": { 
00:13:41.538 "read": true, 00:13:41.538 "write": true, 00:13:41.538 "unmap": true, 00:13:41.538 "flush": true, 00:13:41.538 "reset": true, 00:13:41.538 "nvme_admin": false, 00:13:41.538 "nvme_io": false, 00:13:41.538 "nvme_io_md": false, 00:13:41.538 "write_zeroes": true, 00:13:41.538 "zcopy": true, 00:13:41.538 "get_zone_info": false, 00:13:41.538 "zone_management": false, 00:13:41.538 "zone_append": false, 00:13:41.538 "compare": false, 00:13:41.538 "compare_and_write": false, 00:13:41.538 "abort": true, 00:13:41.538 "seek_hole": false, 00:13:41.538 "seek_data": false, 00:13:41.538 "copy": true, 00:13:41.538 "nvme_iov_md": false 00:13:41.538 }, 00:13:41.538 "memory_domains": [ 00:13:41.538 { 00:13:41.538 "dma_device_id": "system", 00:13:41.538 "dma_device_type": 1 00:13:41.538 }, 00:13:41.538 { 00:13:41.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.538 "dma_device_type": 2 00:13:41.538 } 00:13:41.538 ], 00:13:41.538 "driver_specific": {} 00:13:41.538 } 00:13:41.538 ] 00:13:41.538 14:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.538 14:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:41.538 14:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:41.538 14:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:41.538 14:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:41.538 14:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.538 14:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:41.538 14:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:41.538 14:39:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.538 14:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:41.538 14:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.538 14:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.538 14:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.538 14:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.538 14:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.538 14:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.538 14:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.538 14:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.538 14:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.538 14:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.538 "name": "Existed_Raid", 00:13:41.538 "uuid": "1f12a3a1-728b-4862-a269-269b997aa2cd", 00:13:41.538 "strip_size_kb": 64, 00:13:41.538 "state": "configuring", 00:13:41.538 "raid_level": "concat", 00:13:41.538 "superblock": true, 00:13:41.538 "num_base_bdevs": 4, 00:13:41.538 "num_base_bdevs_discovered": 3, 00:13:41.538 "num_base_bdevs_operational": 4, 00:13:41.538 "base_bdevs_list": [ 00:13:41.538 { 00:13:41.538 "name": "BaseBdev1", 00:13:41.538 "uuid": "5163769b-4d90-4956-920c-fe75189f87f0", 00:13:41.538 "is_configured": true, 00:13:41.538 "data_offset": 2048, 00:13:41.538 "data_size": 63488 00:13:41.538 }, 00:13:41.538 { 00:13:41.538 "name": "BaseBdev2", 00:13:41.538 
"uuid": "f27a96c2-8db6-4963-a4d5-49bb64e9a9d0", 00:13:41.538 "is_configured": true, 00:13:41.538 "data_offset": 2048, 00:13:41.538 "data_size": 63488 00:13:41.538 }, 00:13:41.538 { 00:13:41.538 "name": "BaseBdev3", 00:13:41.538 "uuid": "4de974af-0d2a-47b1-b40e-542061d26ec7", 00:13:41.538 "is_configured": true, 00:13:41.538 "data_offset": 2048, 00:13:41.538 "data_size": 63488 00:13:41.538 }, 00:13:41.538 { 00:13:41.538 "name": "BaseBdev4", 00:13:41.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.538 "is_configured": false, 00:13:41.539 "data_offset": 0, 00:13:41.539 "data_size": 0 00:13:41.539 } 00:13:41.539 ] 00:13:41.539 }' 00:13:41.539 14:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.539 14:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.106 14:39:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:42.106 14:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.106 14:39:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.106 [2024-11-04 14:39:41.016233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:42.106 [2024-11-04 14:39:41.016570] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:42.106 [2024-11-04 14:39:41.016588] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:42.106 [2024-11-04 14:39:41.016890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:42.106 BaseBdev4 00:13:42.106 [2024-11-04 14:39:41.017100] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:42.106 [2024-11-04 14:39:41.017123] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:13:42.106 [2024-11-04 14:39:41.017317] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.106 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.106 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:42.106 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:13:42.106 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:42.106 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:42.106 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:42.106 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:42.106 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:42.106 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.106 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.106 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.106 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:42.106 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.106 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.106 [ 00:13:42.106 { 00:13:42.106 "name": "BaseBdev4", 00:13:42.106 "aliases": [ 00:13:42.106 "d943e27b-896b-48c1-b423-4272c19c8052" 00:13:42.106 ], 00:13:42.106 "product_name": "Malloc disk", 00:13:42.106 "block_size": 512, 00:13:42.106 
"num_blocks": 65536, 00:13:42.106 "uuid": "d943e27b-896b-48c1-b423-4272c19c8052", 00:13:42.106 "assigned_rate_limits": { 00:13:42.106 "rw_ios_per_sec": 0, 00:13:42.106 "rw_mbytes_per_sec": 0, 00:13:42.106 "r_mbytes_per_sec": 0, 00:13:42.106 "w_mbytes_per_sec": 0 00:13:42.106 }, 00:13:42.106 "claimed": true, 00:13:42.106 "claim_type": "exclusive_write", 00:13:42.106 "zoned": false, 00:13:42.106 "supported_io_types": { 00:13:42.106 "read": true, 00:13:42.106 "write": true, 00:13:42.106 "unmap": true, 00:13:42.106 "flush": true, 00:13:42.106 "reset": true, 00:13:42.106 "nvme_admin": false, 00:13:42.106 "nvme_io": false, 00:13:42.106 "nvme_io_md": false, 00:13:42.106 "write_zeroes": true, 00:13:42.106 "zcopy": true, 00:13:42.106 "get_zone_info": false, 00:13:42.106 "zone_management": false, 00:13:42.106 "zone_append": false, 00:13:42.106 "compare": false, 00:13:42.106 "compare_and_write": false, 00:13:42.106 "abort": true, 00:13:42.106 "seek_hole": false, 00:13:42.106 "seek_data": false, 00:13:42.106 "copy": true, 00:13:42.106 "nvme_iov_md": false 00:13:42.106 }, 00:13:42.106 "memory_domains": [ 00:13:42.106 { 00:13:42.106 "dma_device_id": "system", 00:13:42.106 "dma_device_type": 1 00:13:42.106 }, 00:13:42.106 { 00:13:42.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.106 "dma_device_type": 2 00:13:42.106 } 00:13:42.106 ], 00:13:42.106 "driver_specific": {} 00:13:42.106 } 00:13:42.106 ] 00:13:42.106 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.106 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:42.106 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:42.106 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:42.106 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:13:42.106 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:42.106 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.106 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:42.106 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:42.106 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:42.106 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.106 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.106 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.106 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.106 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.106 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.106 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.106 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.106 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.106 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.106 "name": "Existed_Raid", 00:13:42.106 "uuid": "1f12a3a1-728b-4862-a269-269b997aa2cd", 00:13:42.106 "strip_size_kb": 64, 00:13:42.106 "state": "online", 00:13:42.106 "raid_level": "concat", 00:13:42.106 "superblock": true, 00:13:42.106 "num_base_bdevs": 4, 
00:13:42.106 "num_base_bdevs_discovered": 4, 00:13:42.106 "num_base_bdevs_operational": 4, 00:13:42.106 "base_bdevs_list": [ 00:13:42.106 { 00:13:42.106 "name": "BaseBdev1", 00:13:42.106 "uuid": "5163769b-4d90-4956-920c-fe75189f87f0", 00:13:42.106 "is_configured": true, 00:13:42.107 "data_offset": 2048, 00:13:42.107 "data_size": 63488 00:13:42.107 }, 00:13:42.107 { 00:13:42.107 "name": "BaseBdev2", 00:13:42.107 "uuid": "f27a96c2-8db6-4963-a4d5-49bb64e9a9d0", 00:13:42.107 "is_configured": true, 00:13:42.107 "data_offset": 2048, 00:13:42.107 "data_size": 63488 00:13:42.107 }, 00:13:42.107 { 00:13:42.107 "name": "BaseBdev3", 00:13:42.107 "uuid": "4de974af-0d2a-47b1-b40e-542061d26ec7", 00:13:42.107 "is_configured": true, 00:13:42.107 "data_offset": 2048, 00:13:42.107 "data_size": 63488 00:13:42.107 }, 00:13:42.107 { 00:13:42.107 "name": "BaseBdev4", 00:13:42.107 "uuid": "d943e27b-896b-48c1-b423-4272c19c8052", 00:13:42.107 "is_configured": true, 00:13:42.107 "data_offset": 2048, 00:13:42.107 "data_size": 63488 00:13:42.107 } 00:13:42.107 ] 00:13:42.107 }' 00:13:42.107 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.107 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.696 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:42.696 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:42.696 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:42.696 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:42.696 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:42.696 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:42.696 
14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:42.696 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.696 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:42.696 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.696 [2024-11-04 14:39:41.576925] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:42.696 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.696 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:42.696 "name": "Existed_Raid", 00:13:42.696 "aliases": [ 00:13:42.696 "1f12a3a1-728b-4862-a269-269b997aa2cd" 00:13:42.696 ], 00:13:42.696 "product_name": "Raid Volume", 00:13:42.696 "block_size": 512, 00:13:42.696 "num_blocks": 253952, 00:13:42.696 "uuid": "1f12a3a1-728b-4862-a269-269b997aa2cd", 00:13:42.696 "assigned_rate_limits": { 00:13:42.696 "rw_ios_per_sec": 0, 00:13:42.696 "rw_mbytes_per_sec": 0, 00:13:42.696 "r_mbytes_per_sec": 0, 00:13:42.696 "w_mbytes_per_sec": 0 00:13:42.696 }, 00:13:42.696 "claimed": false, 00:13:42.696 "zoned": false, 00:13:42.696 "supported_io_types": { 00:13:42.696 "read": true, 00:13:42.696 "write": true, 00:13:42.696 "unmap": true, 00:13:42.696 "flush": true, 00:13:42.696 "reset": true, 00:13:42.696 "nvme_admin": false, 00:13:42.696 "nvme_io": false, 00:13:42.696 "nvme_io_md": false, 00:13:42.696 "write_zeroes": true, 00:13:42.696 "zcopy": false, 00:13:42.696 "get_zone_info": false, 00:13:42.696 "zone_management": false, 00:13:42.696 "zone_append": false, 00:13:42.696 "compare": false, 00:13:42.696 "compare_and_write": false, 00:13:42.696 "abort": false, 00:13:42.696 "seek_hole": false, 00:13:42.696 "seek_data": false, 00:13:42.696 "copy": false, 00:13:42.696 
"nvme_iov_md": false 00:13:42.696 }, 00:13:42.696 "memory_domains": [ 00:13:42.696 { 00:13:42.696 "dma_device_id": "system", 00:13:42.696 "dma_device_type": 1 00:13:42.696 }, 00:13:42.696 { 00:13:42.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.696 "dma_device_type": 2 00:13:42.696 }, 00:13:42.696 { 00:13:42.696 "dma_device_id": "system", 00:13:42.696 "dma_device_type": 1 00:13:42.696 }, 00:13:42.696 { 00:13:42.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.696 "dma_device_type": 2 00:13:42.696 }, 00:13:42.696 { 00:13:42.696 "dma_device_id": "system", 00:13:42.696 "dma_device_type": 1 00:13:42.696 }, 00:13:42.696 { 00:13:42.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.696 "dma_device_type": 2 00:13:42.696 }, 00:13:42.696 { 00:13:42.696 "dma_device_id": "system", 00:13:42.696 "dma_device_type": 1 00:13:42.696 }, 00:13:42.696 { 00:13:42.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.696 "dma_device_type": 2 00:13:42.696 } 00:13:42.696 ], 00:13:42.696 "driver_specific": { 00:13:42.696 "raid": { 00:13:42.696 "uuid": "1f12a3a1-728b-4862-a269-269b997aa2cd", 00:13:42.696 "strip_size_kb": 64, 00:13:42.696 "state": "online", 00:13:42.696 "raid_level": "concat", 00:13:42.696 "superblock": true, 00:13:42.696 "num_base_bdevs": 4, 00:13:42.696 "num_base_bdevs_discovered": 4, 00:13:42.696 "num_base_bdevs_operational": 4, 00:13:42.696 "base_bdevs_list": [ 00:13:42.696 { 00:13:42.696 "name": "BaseBdev1", 00:13:42.696 "uuid": "5163769b-4d90-4956-920c-fe75189f87f0", 00:13:42.696 "is_configured": true, 00:13:42.696 "data_offset": 2048, 00:13:42.696 "data_size": 63488 00:13:42.696 }, 00:13:42.696 { 00:13:42.696 "name": "BaseBdev2", 00:13:42.696 "uuid": "f27a96c2-8db6-4963-a4d5-49bb64e9a9d0", 00:13:42.696 "is_configured": true, 00:13:42.696 "data_offset": 2048, 00:13:42.696 "data_size": 63488 00:13:42.696 }, 00:13:42.696 { 00:13:42.696 "name": "BaseBdev3", 00:13:42.696 "uuid": "4de974af-0d2a-47b1-b40e-542061d26ec7", 00:13:42.696 "is_configured": true, 
00:13:42.696 "data_offset": 2048, 00:13:42.696 "data_size": 63488 00:13:42.696 }, 00:13:42.696 { 00:13:42.696 "name": "BaseBdev4", 00:13:42.696 "uuid": "d943e27b-896b-48c1-b423-4272c19c8052", 00:13:42.696 "is_configured": true, 00:13:42.696 "data_offset": 2048, 00:13:42.696 "data_size": 63488 00:13:42.696 } 00:13:42.696 ] 00:13:42.696 } 00:13:42.696 } 00:13:42.696 }' 00:13:42.696 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:42.696 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:42.696 BaseBdev2 00:13:42.696 BaseBdev3 00:13:42.696 BaseBdev4' 00:13:42.696 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.696 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:42.696 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:42.696 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:42.696 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.696 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.696 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.696 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.696 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:42.696 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:42.696 14:39:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:42.696 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.696 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:42.696 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.696 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.696 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.956 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:42.956 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:42.956 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:42.956 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:42.956 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.956 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.956 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.956 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.956 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:42.956 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:42.956 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:13:42.956 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:42.956 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.956 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.956 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:42.956 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.956 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:42.956 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:42.956 14:39:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:42.956 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.956 14:39:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.956 [2024-11-04 14:39:41.956681] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:42.956 [2024-11-04 14:39:41.956717] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:42.956 [2024-11-04 14:39:41.956773] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:42.956 14:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.956 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:42.956 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:13:42.956 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:13:42.956 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:13:42.956 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:42.956 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:13:42.956 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:42.956 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:42.956 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:42.956 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:42.956 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:42.956 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.956 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.956 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.956 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.956 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.956 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.956 14:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.956 14:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.956 14:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:43.215 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.215 "name": "Existed_Raid", 00:13:43.215 "uuid": "1f12a3a1-728b-4862-a269-269b997aa2cd", 00:13:43.215 "strip_size_kb": 64, 00:13:43.215 "state": "offline", 00:13:43.215 "raid_level": "concat", 00:13:43.215 "superblock": true, 00:13:43.215 "num_base_bdevs": 4, 00:13:43.215 "num_base_bdevs_discovered": 3, 00:13:43.215 "num_base_bdevs_operational": 3, 00:13:43.215 "base_bdevs_list": [ 00:13:43.215 { 00:13:43.215 "name": null, 00:13:43.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.215 "is_configured": false, 00:13:43.215 "data_offset": 0, 00:13:43.215 "data_size": 63488 00:13:43.215 }, 00:13:43.215 { 00:13:43.215 "name": "BaseBdev2", 00:13:43.215 "uuid": "f27a96c2-8db6-4963-a4d5-49bb64e9a9d0", 00:13:43.215 "is_configured": true, 00:13:43.215 "data_offset": 2048, 00:13:43.215 "data_size": 63488 00:13:43.215 }, 00:13:43.215 { 00:13:43.215 "name": "BaseBdev3", 00:13:43.215 "uuid": "4de974af-0d2a-47b1-b40e-542061d26ec7", 00:13:43.215 "is_configured": true, 00:13:43.215 "data_offset": 2048, 00:13:43.215 "data_size": 63488 00:13:43.215 }, 00:13:43.215 { 00:13:43.215 "name": "BaseBdev4", 00:13:43.215 "uuid": "d943e27b-896b-48c1-b423-4272c19c8052", 00:13:43.215 "is_configured": true, 00:13:43.215 "data_offset": 2048, 00:13:43.215 "data_size": 63488 00:13:43.215 } 00:13:43.215 ] 00:13:43.215 }' 00:13:43.215 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.215 14:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.474 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:43.474 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:43.474 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.474 
14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:43.474 14:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.474 14:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.474 14:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.733 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:43.733 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:43.733 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:43.733 14:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.733 14:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.733 [2024-11-04 14:39:42.618099] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:43.733 14:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.733 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:43.733 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:43.733 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:43.733 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.733 14:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.733 14:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.733 14:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:43.733 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:43.733 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:43.733 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:43.733 14:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.733 14:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.733 [2024-11-04 14:39:42.753480] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:43.733 14:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.733 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:43.733 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:43.733 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.733 14:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.733 14:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.733 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:43.733 14:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.992 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:43.992 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:43.992 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:43.992 14:39:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.992 14:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.992 [2024-11-04 14:39:42.888416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:43.992 [2024-11-04 14:39:42.888470] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:43.992 14:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.992 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:43.992 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:43.992 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.992 14:39:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:43.992 14:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.992 14:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.992 14:39:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.992 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:43.992 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:43.992 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:43.992 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:43.992 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:43.992 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:13:43.992 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.992 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.992 BaseBdev2 00:13:43.992 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.992 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:43.992 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:43.992 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:43.992 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:43.992 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:43.992 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:43.992 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:43.992 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.992 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.992 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.992 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:43.992 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.992 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.992 [ 00:13:43.992 { 00:13:43.992 "name": "BaseBdev2", 00:13:43.992 "aliases": [ 00:13:43.992 
"ff8febb8-5a55-4539-a183-8841905263eb" 00:13:43.992 ], 00:13:43.992 "product_name": "Malloc disk", 00:13:43.992 "block_size": 512, 00:13:43.992 "num_blocks": 65536, 00:13:43.992 "uuid": "ff8febb8-5a55-4539-a183-8841905263eb", 00:13:43.992 "assigned_rate_limits": { 00:13:43.992 "rw_ios_per_sec": 0, 00:13:43.992 "rw_mbytes_per_sec": 0, 00:13:43.992 "r_mbytes_per_sec": 0, 00:13:43.992 "w_mbytes_per_sec": 0 00:13:43.992 }, 00:13:43.992 "claimed": false, 00:13:43.992 "zoned": false, 00:13:43.992 "supported_io_types": { 00:13:43.992 "read": true, 00:13:43.992 "write": true, 00:13:43.992 "unmap": true, 00:13:43.992 "flush": true, 00:13:43.992 "reset": true, 00:13:43.992 "nvme_admin": false, 00:13:43.992 "nvme_io": false, 00:13:43.992 "nvme_io_md": false, 00:13:43.992 "write_zeroes": true, 00:13:43.993 "zcopy": true, 00:13:43.993 "get_zone_info": false, 00:13:43.993 "zone_management": false, 00:13:43.993 "zone_append": false, 00:13:43.993 "compare": false, 00:13:43.993 "compare_and_write": false, 00:13:43.993 "abort": true, 00:13:43.993 "seek_hole": false, 00:13:43.993 "seek_data": false, 00:13:43.993 "copy": true, 00:13:43.993 "nvme_iov_md": false 00:13:43.993 }, 00:13:43.993 "memory_domains": [ 00:13:43.993 { 00:13:43.993 "dma_device_id": "system", 00:13:43.993 "dma_device_type": 1 00:13:43.993 }, 00:13:43.993 { 00:13:43.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.993 "dma_device_type": 2 00:13:43.993 } 00:13:43.993 ], 00:13:43.993 "driver_specific": {} 00:13:43.993 } 00:13:43.993 ] 00:13:43.993 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.993 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:43.993 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:43.993 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:43.993 14:39:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:43.993 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.993 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.252 BaseBdev3 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.252 [ 00:13:44.252 { 
00:13:44.252 "name": "BaseBdev3", 00:13:44.252 "aliases": [ 00:13:44.252 "b0259b50-3829-44e9-91d7-93cf6c85bbc3" 00:13:44.252 ], 00:13:44.252 "product_name": "Malloc disk", 00:13:44.252 "block_size": 512, 00:13:44.252 "num_blocks": 65536, 00:13:44.252 "uuid": "b0259b50-3829-44e9-91d7-93cf6c85bbc3", 00:13:44.252 "assigned_rate_limits": { 00:13:44.252 "rw_ios_per_sec": 0, 00:13:44.252 "rw_mbytes_per_sec": 0, 00:13:44.252 "r_mbytes_per_sec": 0, 00:13:44.252 "w_mbytes_per_sec": 0 00:13:44.252 }, 00:13:44.252 "claimed": false, 00:13:44.252 "zoned": false, 00:13:44.252 "supported_io_types": { 00:13:44.252 "read": true, 00:13:44.252 "write": true, 00:13:44.252 "unmap": true, 00:13:44.252 "flush": true, 00:13:44.252 "reset": true, 00:13:44.252 "nvme_admin": false, 00:13:44.252 "nvme_io": false, 00:13:44.252 "nvme_io_md": false, 00:13:44.252 "write_zeroes": true, 00:13:44.252 "zcopy": true, 00:13:44.252 "get_zone_info": false, 00:13:44.252 "zone_management": false, 00:13:44.252 "zone_append": false, 00:13:44.252 "compare": false, 00:13:44.252 "compare_and_write": false, 00:13:44.252 "abort": true, 00:13:44.252 "seek_hole": false, 00:13:44.252 "seek_data": false, 00:13:44.252 "copy": true, 00:13:44.252 "nvme_iov_md": false 00:13:44.252 }, 00:13:44.252 "memory_domains": [ 00:13:44.252 { 00:13:44.252 "dma_device_id": "system", 00:13:44.252 "dma_device_type": 1 00:13:44.252 }, 00:13:44.252 { 00:13:44.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.252 "dma_device_type": 2 00:13:44.252 } 00:13:44.252 ], 00:13:44.252 "driver_specific": {} 00:13:44.252 } 00:13:44.252 ] 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.252 BaseBdev4 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.252 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:13:44.252 [ 00:13:44.252 { 00:13:44.252 "name": "BaseBdev4", 00:13:44.252 "aliases": [ 00:13:44.252 "dd490580-4593-4390-b286-e4ef95a2b23c" 00:13:44.252 ], 00:13:44.252 "product_name": "Malloc disk", 00:13:44.252 "block_size": 512, 00:13:44.252 "num_blocks": 65536, 00:13:44.252 "uuid": "dd490580-4593-4390-b286-e4ef95a2b23c", 00:13:44.252 "assigned_rate_limits": { 00:13:44.252 "rw_ios_per_sec": 0, 00:13:44.252 "rw_mbytes_per_sec": 0, 00:13:44.252 "r_mbytes_per_sec": 0, 00:13:44.252 "w_mbytes_per_sec": 0 00:13:44.252 }, 00:13:44.252 "claimed": false, 00:13:44.252 "zoned": false, 00:13:44.252 "supported_io_types": { 00:13:44.252 "read": true, 00:13:44.252 "write": true, 00:13:44.252 "unmap": true, 00:13:44.252 "flush": true, 00:13:44.252 "reset": true, 00:13:44.252 "nvme_admin": false, 00:13:44.252 "nvme_io": false, 00:13:44.252 "nvme_io_md": false, 00:13:44.252 "write_zeroes": true, 00:13:44.252 "zcopy": true, 00:13:44.252 "get_zone_info": false, 00:13:44.252 "zone_management": false, 00:13:44.252 "zone_append": false, 00:13:44.252 "compare": false, 00:13:44.253 "compare_and_write": false, 00:13:44.253 "abort": true, 00:13:44.253 "seek_hole": false, 00:13:44.253 "seek_data": false, 00:13:44.253 "copy": true, 00:13:44.253 "nvme_iov_md": false 00:13:44.253 }, 00:13:44.253 "memory_domains": [ 00:13:44.253 { 00:13:44.253 "dma_device_id": "system", 00:13:44.253 "dma_device_type": 1 00:13:44.253 }, 00:13:44.253 { 00:13:44.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.253 "dma_device_type": 2 00:13:44.253 } 00:13:44.253 ], 00:13:44.253 "driver_specific": {} 00:13:44.253 } 00:13:44.253 ] 00:13:44.253 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.253 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:44.253 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:44.253 14:39:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:44.253 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:44.253 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.253 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.253 [2024-11-04 14:39:43.262233] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:44.253 [2024-11-04 14:39:43.262492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:44.253 [2024-11-04 14:39:43.262537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:44.253 [2024-11-04 14:39:43.265047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:44.253 [2024-11-04 14:39:43.265135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:44.253 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.253 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:44.253 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.253 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.253 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:44.253 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.253 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:44.253 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.253 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.253 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.253 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.253 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.253 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.253 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.253 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.253 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.253 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.253 "name": "Existed_Raid", 00:13:44.253 "uuid": "b900c83a-70d0-49d9-809e-18143919227d", 00:13:44.253 "strip_size_kb": 64, 00:13:44.253 "state": "configuring", 00:13:44.253 "raid_level": "concat", 00:13:44.253 "superblock": true, 00:13:44.253 "num_base_bdevs": 4, 00:13:44.253 "num_base_bdevs_discovered": 3, 00:13:44.253 "num_base_bdevs_operational": 4, 00:13:44.253 "base_bdevs_list": [ 00:13:44.253 { 00:13:44.253 "name": "BaseBdev1", 00:13:44.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.253 "is_configured": false, 00:13:44.253 "data_offset": 0, 00:13:44.253 "data_size": 0 00:13:44.253 }, 00:13:44.253 { 00:13:44.253 "name": "BaseBdev2", 00:13:44.253 "uuid": "ff8febb8-5a55-4539-a183-8841905263eb", 00:13:44.253 "is_configured": true, 00:13:44.253 "data_offset": 2048, 00:13:44.253 "data_size": 63488 
00:13:44.253 }, 00:13:44.253 { 00:13:44.253 "name": "BaseBdev3", 00:13:44.253 "uuid": "b0259b50-3829-44e9-91d7-93cf6c85bbc3", 00:13:44.253 "is_configured": true, 00:13:44.253 "data_offset": 2048, 00:13:44.253 "data_size": 63488 00:13:44.253 }, 00:13:44.253 { 00:13:44.253 "name": "BaseBdev4", 00:13:44.253 "uuid": "dd490580-4593-4390-b286-e4ef95a2b23c", 00:13:44.253 "is_configured": true, 00:13:44.253 "data_offset": 2048, 00:13:44.253 "data_size": 63488 00:13:44.253 } 00:13:44.253 ] 00:13:44.253 }' 00:13:44.253 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.253 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.821 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:44.821 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.821 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.821 [2024-11-04 14:39:43.794449] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:44.821 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.821 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:44.821 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.821 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.821 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:44.821 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.821 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:44.821 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.821 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.821 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.821 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.821 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.821 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.821 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.821 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.821 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.821 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.821 "name": "Existed_Raid", 00:13:44.821 "uuid": "b900c83a-70d0-49d9-809e-18143919227d", 00:13:44.821 "strip_size_kb": 64, 00:13:44.821 "state": "configuring", 00:13:44.821 "raid_level": "concat", 00:13:44.821 "superblock": true, 00:13:44.821 "num_base_bdevs": 4, 00:13:44.821 "num_base_bdevs_discovered": 2, 00:13:44.821 "num_base_bdevs_operational": 4, 00:13:44.821 "base_bdevs_list": [ 00:13:44.821 { 00:13:44.821 "name": "BaseBdev1", 00:13:44.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.821 "is_configured": false, 00:13:44.821 "data_offset": 0, 00:13:44.821 "data_size": 0 00:13:44.821 }, 00:13:44.821 { 00:13:44.821 "name": null, 00:13:44.821 "uuid": "ff8febb8-5a55-4539-a183-8841905263eb", 00:13:44.821 "is_configured": false, 00:13:44.821 "data_offset": 0, 00:13:44.821 "data_size": 63488 
00:13:44.821 }, 00:13:44.821 { 00:13:44.821 "name": "BaseBdev3", 00:13:44.821 "uuid": "b0259b50-3829-44e9-91d7-93cf6c85bbc3", 00:13:44.821 "is_configured": true, 00:13:44.821 "data_offset": 2048, 00:13:44.821 "data_size": 63488 00:13:44.821 }, 00:13:44.821 { 00:13:44.821 "name": "BaseBdev4", 00:13:44.821 "uuid": "dd490580-4593-4390-b286-e4ef95a2b23c", 00:13:44.821 "is_configured": true, 00:13:44.821 "data_offset": 2048, 00:13:44.821 "data_size": 63488 00:13:44.821 } 00:13:44.821 ] 00:13:44.821 }' 00:13:44.821 14:39:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.821 14:39:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.388 14:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.389 [2024-11-04 14:39:44.433909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:45.389 BaseBdev1 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.389 [ 00:13:45.389 { 00:13:45.389 "name": "BaseBdev1", 00:13:45.389 "aliases": [ 00:13:45.389 "a4fa7ff4-6bb1-46fa-94e8-74dbd1f31e2c" 00:13:45.389 ], 00:13:45.389 "product_name": "Malloc disk", 00:13:45.389 "block_size": 512, 00:13:45.389 "num_blocks": 65536, 00:13:45.389 "uuid": "a4fa7ff4-6bb1-46fa-94e8-74dbd1f31e2c", 00:13:45.389 "assigned_rate_limits": { 00:13:45.389 "rw_ios_per_sec": 0, 00:13:45.389 "rw_mbytes_per_sec": 0, 
00:13:45.389 "r_mbytes_per_sec": 0, 00:13:45.389 "w_mbytes_per_sec": 0 00:13:45.389 }, 00:13:45.389 "claimed": true, 00:13:45.389 "claim_type": "exclusive_write", 00:13:45.389 "zoned": false, 00:13:45.389 "supported_io_types": { 00:13:45.389 "read": true, 00:13:45.389 "write": true, 00:13:45.389 "unmap": true, 00:13:45.389 "flush": true, 00:13:45.389 "reset": true, 00:13:45.389 "nvme_admin": false, 00:13:45.389 "nvme_io": false, 00:13:45.389 "nvme_io_md": false, 00:13:45.389 "write_zeroes": true, 00:13:45.389 "zcopy": true, 00:13:45.389 "get_zone_info": false, 00:13:45.389 "zone_management": false, 00:13:45.389 "zone_append": false, 00:13:45.389 "compare": false, 00:13:45.389 "compare_and_write": false, 00:13:45.389 "abort": true, 00:13:45.389 "seek_hole": false, 00:13:45.389 "seek_data": false, 00:13:45.389 "copy": true, 00:13:45.389 "nvme_iov_md": false 00:13:45.389 }, 00:13:45.389 "memory_domains": [ 00:13:45.389 { 00:13:45.389 "dma_device_id": "system", 00:13:45.389 "dma_device_type": 1 00:13:45.389 }, 00:13:45.389 { 00:13:45.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.389 "dma_device_type": 2 00:13:45.389 } 00:13:45.389 ], 00:13:45.389 "driver_specific": {} 00:13:45.389 } 00:13:45.389 ] 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:45.389 14:39:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.389 14:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.648 14:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.648 "name": "Existed_Raid", 00:13:45.648 "uuid": "b900c83a-70d0-49d9-809e-18143919227d", 00:13:45.648 "strip_size_kb": 64, 00:13:45.648 "state": "configuring", 00:13:45.648 "raid_level": "concat", 00:13:45.648 "superblock": true, 00:13:45.648 "num_base_bdevs": 4, 00:13:45.648 "num_base_bdevs_discovered": 3, 00:13:45.648 "num_base_bdevs_operational": 4, 00:13:45.648 "base_bdevs_list": [ 00:13:45.648 { 00:13:45.648 "name": "BaseBdev1", 00:13:45.648 "uuid": "a4fa7ff4-6bb1-46fa-94e8-74dbd1f31e2c", 00:13:45.648 "is_configured": true, 00:13:45.648 "data_offset": 2048, 00:13:45.648 "data_size": 63488 00:13:45.648 }, 00:13:45.648 { 
00:13:45.648 "name": null, 00:13:45.648 "uuid": "ff8febb8-5a55-4539-a183-8841905263eb", 00:13:45.648 "is_configured": false, 00:13:45.648 "data_offset": 0, 00:13:45.648 "data_size": 63488 00:13:45.648 }, 00:13:45.648 { 00:13:45.648 "name": "BaseBdev3", 00:13:45.648 "uuid": "b0259b50-3829-44e9-91d7-93cf6c85bbc3", 00:13:45.648 "is_configured": true, 00:13:45.648 "data_offset": 2048, 00:13:45.648 "data_size": 63488 00:13:45.648 }, 00:13:45.648 { 00:13:45.648 "name": "BaseBdev4", 00:13:45.648 "uuid": "dd490580-4593-4390-b286-e4ef95a2b23c", 00:13:45.648 "is_configured": true, 00:13:45.648 "data_offset": 2048, 00:13:45.648 "data_size": 63488 00:13:45.648 } 00:13:45.648 ] 00:13:45.648 }' 00:13:45.648 14:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.648 14:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.907 14:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.907 14:39:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:45.907 14:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.907 14:39:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.907 14:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.165 14:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:46.165 14:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:46.165 14:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.165 14:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.165 [2024-11-04 14:39:45.038246] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:46.165 14:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.165 14:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:46.165 14:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.165 14:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.165 14:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:46.165 14:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.165 14:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:46.165 14:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.165 14:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.165 14:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.165 14:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.165 14:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.165 14:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.165 14:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.165 14:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.165 14:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.165 14:39:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.165 "name": "Existed_Raid", 00:13:46.165 "uuid": "b900c83a-70d0-49d9-809e-18143919227d", 00:13:46.165 "strip_size_kb": 64, 00:13:46.165 "state": "configuring", 00:13:46.165 "raid_level": "concat", 00:13:46.165 "superblock": true, 00:13:46.165 "num_base_bdevs": 4, 00:13:46.165 "num_base_bdevs_discovered": 2, 00:13:46.165 "num_base_bdevs_operational": 4, 00:13:46.165 "base_bdevs_list": [ 00:13:46.165 { 00:13:46.165 "name": "BaseBdev1", 00:13:46.165 "uuid": "a4fa7ff4-6bb1-46fa-94e8-74dbd1f31e2c", 00:13:46.165 "is_configured": true, 00:13:46.165 "data_offset": 2048, 00:13:46.165 "data_size": 63488 00:13:46.165 }, 00:13:46.165 { 00:13:46.165 "name": null, 00:13:46.165 "uuid": "ff8febb8-5a55-4539-a183-8841905263eb", 00:13:46.165 "is_configured": false, 00:13:46.165 "data_offset": 0, 00:13:46.165 "data_size": 63488 00:13:46.165 }, 00:13:46.165 { 00:13:46.165 "name": null, 00:13:46.165 "uuid": "b0259b50-3829-44e9-91d7-93cf6c85bbc3", 00:13:46.165 "is_configured": false, 00:13:46.165 "data_offset": 0, 00:13:46.165 "data_size": 63488 00:13:46.165 }, 00:13:46.165 { 00:13:46.165 "name": "BaseBdev4", 00:13:46.165 "uuid": "dd490580-4593-4390-b286-e4ef95a2b23c", 00:13:46.165 "is_configured": true, 00:13:46.165 "data_offset": 2048, 00:13:46.166 "data_size": 63488 00:13:46.166 } 00:13:46.166 ] 00:13:46.166 }' 00:13:46.166 14:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.166 14:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.733 14:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.733 14:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:46.733 14:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.733 
14:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.733 14:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.733 14:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:46.733 14:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:46.733 14:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.733 14:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.733 [2024-11-04 14:39:45.762532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:46.733 14:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.733 14:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:46.733 14:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.733 14:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.733 14:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:46.733 14:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.733 14:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:46.733 14:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.733 14:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.733 14:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:46.733 14:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.733 14:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.733 14:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.733 14:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.733 14:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.733 14:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.733 14:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.733 "name": "Existed_Raid", 00:13:46.733 "uuid": "b900c83a-70d0-49d9-809e-18143919227d", 00:13:46.733 "strip_size_kb": 64, 00:13:46.733 "state": "configuring", 00:13:46.733 "raid_level": "concat", 00:13:46.733 "superblock": true, 00:13:46.733 "num_base_bdevs": 4, 00:13:46.733 "num_base_bdevs_discovered": 3, 00:13:46.733 "num_base_bdevs_operational": 4, 00:13:46.733 "base_bdevs_list": [ 00:13:46.733 { 00:13:46.733 "name": "BaseBdev1", 00:13:46.733 "uuid": "a4fa7ff4-6bb1-46fa-94e8-74dbd1f31e2c", 00:13:46.733 "is_configured": true, 00:13:46.733 "data_offset": 2048, 00:13:46.733 "data_size": 63488 00:13:46.733 }, 00:13:46.733 { 00:13:46.733 "name": null, 00:13:46.733 "uuid": "ff8febb8-5a55-4539-a183-8841905263eb", 00:13:46.733 "is_configured": false, 00:13:46.733 "data_offset": 0, 00:13:46.733 "data_size": 63488 00:13:46.733 }, 00:13:46.733 { 00:13:46.733 "name": "BaseBdev3", 00:13:46.733 "uuid": "b0259b50-3829-44e9-91d7-93cf6c85bbc3", 00:13:46.733 "is_configured": true, 00:13:46.733 "data_offset": 2048, 00:13:46.733 "data_size": 63488 00:13:46.733 }, 00:13:46.733 { 00:13:46.733 "name": "BaseBdev4", 00:13:46.733 "uuid": 
"dd490580-4593-4390-b286-e4ef95a2b23c", 00:13:46.733 "is_configured": true, 00:13:46.733 "data_offset": 2048, 00:13:46.733 "data_size": 63488 00:13:46.733 } 00:13:46.733 ] 00:13:46.733 }' 00:13:46.733 14:39:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.733 14:39:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.301 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:47.301 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.301 14:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.301 14:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.301 14:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.302 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:47.302 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:47.302 14:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.302 14:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.302 [2024-11-04 14:39:46.318784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:47.302 14:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.302 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:47.302 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.302 14:39:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.302 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:47.302 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.302 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:47.302 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.302 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.302 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.302 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.302 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.302 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.302 14:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.302 14:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.560 14:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.560 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.560 "name": "Existed_Raid", 00:13:47.561 "uuid": "b900c83a-70d0-49d9-809e-18143919227d", 00:13:47.561 "strip_size_kb": 64, 00:13:47.561 "state": "configuring", 00:13:47.561 "raid_level": "concat", 00:13:47.561 "superblock": true, 00:13:47.561 "num_base_bdevs": 4, 00:13:47.561 "num_base_bdevs_discovered": 2, 00:13:47.561 "num_base_bdevs_operational": 4, 00:13:47.561 "base_bdevs_list": [ 00:13:47.561 { 00:13:47.561 "name": null, 00:13:47.561 
"uuid": "a4fa7ff4-6bb1-46fa-94e8-74dbd1f31e2c", 00:13:47.561 "is_configured": false, 00:13:47.561 "data_offset": 0, 00:13:47.561 "data_size": 63488 00:13:47.561 }, 00:13:47.561 { 00:13:47.561 "name": null, 00:13:47.561 "uuid": "ff8febb8-5a55-4539-a183-8841905263eb", 00:13:47.561 "is_configured": false, 00:13:47.561 "data_offset": 0, 00:13:47.561 "data_size": 63488 00:13:47.561 }, 00:13:47.561 { 00:13:47.561 "name": "BaseBdev3", 00:13:47.561 "uuid": "b0259b50-3829-44e9-91d7-93cf6c85bbc3", 00:13:47.561 "is_configured": true, 00:13:47.561 "data_offset": 2048, 00:13:47.561 "data_size": 63488 00:13:47.561 }, 00:13:47.561 { 00:13:47.561 "name": "BaseBdev4", 00:13:47.561 "uuid": "dd490580-4593-4390-b286-e4ef95a2b23c", 00:13:47.561 "is_configured": true, 00:13:47.561 "data_offset": 2048, 00:13:47.561 "data_size": 63488 00:13:47.561 } 00:13:47.561 ] 00:13:47.561 }' 00:13:47.561 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.561 14:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.820 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.820 14:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.820 14:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.820 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:47.820 14:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.820 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:47.820 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:47.820 14:39:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.820 14:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.820 [2024-11-04 14:39:46.924827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:47.820 14:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.820 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:47.820 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.820 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.820 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:47.820 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.820 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:47.820 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.820 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.820 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.820 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.820 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.820 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.820 14:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.820 14:39:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.079 14:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.079 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.079 "name": "Existed_Raid", 00:13:48.079 "uuid": "b900c83a-70d0-49d9-809e-18143919227d", 00:13:48.079 "strip_size_kb": 64, 00:13:48.079 "state": "configuring", 00:13:48.079 "raid_level": "concat", 00:13:48.079 "superblock": true, 00:13:48.079 "num_base_bdevs": 4, 00:13:48.079 "num_base_bdevs_discovered": 3, 00:13:48.079 "num_base_bdevs_operational": 4, 00:13:48.079 "base_bdevs_list": [ 00:13:48.079 { 00:13:48.079 "name": null, 00:13:48.079 "uuid": "a4fa7ff4-6bb1-46fa-94e8-74dbd1f31e2c", 00:13:48.079 "is_configured": false, 00:13:48.079 "data_offset": 0, 00:13:48.079 "data_size": 63488 00:13:48.079 }, 00:13:48.079 { 00:13:48.079 "name": "BaseBdev2", 00:13:48.079 "uuid": "ff8febb8-5a55-4539-a183-8841905263eb", 00:13:48.079 "is_configured": true, 00:13:48.079 "data_offset": 2048, 00:13:48.079 "data_size": 63488 00:13:48.079 }, 00:13:48.079 { 00:13:48.079 "name": "BaseBdev3", 00:13:48.079 "uuid": "b0259b50-3829-44e9-91d7-93cf6c85bbc3", 00:13:48.079 "is_configured": true, 00:13:48.079 "data_offset": 2048, 00:13:48.079 "data_size": 63488 00:13:48.079 }, 00:13:48.079 { 00:13:48.079 "name": "BaseBdev4", 00:13:48.079 "uuid": "dd490580-4593-4390-b286-e4ef95a2b23c", 00:13:48.079 "is_configured": true, 00:13:48.079 "data_offset": 2048, 00:13:48.079 "data_size": 63488 00:13:48.079 } 00:13:48.079 ] 00:13:48.079 }' 00:13:48.079 14:39:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.079 14:39:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.338 14:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:48.338 14:39:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.338 14:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.338 14:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.338 14:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.597 14:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:48.597 14:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.597 14:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.597 14:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.597 14:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:48.597 14:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.597 14:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a4fa7ff4-6bb1-46fa-94e8-74dbd1f31e2c 00:13:48.597 14:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.597 14:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.597 [2024-11-04 14:39:47.566444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:48.597 [2024-11-04 14:39:47.567030] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:48.597 [2024-11-04 14:39:47.567055] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:48.597 NewBaseBdev 00:13:48.597 [2024-11-04 14:39:47.567439] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:48.597 [2024-11-04 14:39:47.567644] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:48.597 [2024-11-04 14:39:47.567672] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:48.597 [2024-11-04 14:39:47.567825] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.597 14:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.597 14:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:48.597 14:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:13:48.597 14:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:48.597 14:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:48.597 14:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:48.597 14:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:48.597 14:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:48.597 14:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.597 14:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.597 14:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.597 14:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:48.597 14:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.597 14:39:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.597 [ 00:13:48.597 { 00:13:48.597 "name": "NewBaseBdev", 00:13:48.597 "aliases": [ 00:13:48.597 "a4fa7ff4-6bb1-46fa-94e8-74dbd1f31e2c" 00:13:48.597 ], 00:13:48.597 "product_name": "Malloc disk", 00:13:48.597 "block_size": 512, 00:13:48.597 "num_blocks": 65536, 00:13:48.597 "uuid": "a4fa7ff4-6bb1-46fa-94e8-74dbd1f31e2c", 00:13:48.597 "assigned_rate_limits": { 00:13:48.597 "rw_ios_per_sec": 0, 00:13:48.597 "rw_mbytes_per_sec": 0, 00:13:48.597 "r_mbytes_per_sec": 0, 00:13:48.597 "w_mbytes_per_sec": 0 00:13:48.597 }, 00:13:48.597 "claimed": true, 00:13:48.597 "claim_type": "exclusive_write", 00:13:48.597 "zoned": false, 00:13:48.597 "supported_io_types": { 00:13:48.597 "read": true, 00:13:48.597 "write": true, 00:13:48.597 "unmap": true, 00:13:48.597 "flush": true, 00:13:48.597 "reset": true, 00:13:48.597 "nvme_admin": false, 00:13:48.597 "nvme_io": false, 00:13:48.597 "nvme_io_md": false, 00:13:48.597 "write_zeroes": true, 00:13:48.597 "zcopy": true, 00:13:48.597 "get_zone_info": false, 00:13:48.597 "zone_management": false, 00:13:48.597 "zone_append": false, 00:13:48.598 "compare": false, 00:13:48.598 "compare_and_write": false, 00:13:48.598 "abort": true, 00:13:48.598 "seek_hole": false, 00:13:48.598 "seek_data": false, 00:13:48.598 "copy": true, 00:13:48.598 "nvme_iov_md": false 00:13:48.598 }, 00:13:48.598 "memory_domains": [ 00:13:48.598 { 00:13:48.598 "dma_device_id": "system", 00:13:48.598 "dma_device_type": 1 00:13:48.598 }, 00:13:48.598 { 00:13:48.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.598 "dma_device_type": 2 00:13:48.598 } 00:13:48.598 ], 00:13:48.598 "driver_specific": {} 00:13:48.598 } 00:13:48.598 ] 00:13:48.598 14:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.598 14:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:48.598 14:39:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:13:48.598 14:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.598 14:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.598 14:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:48.598 14:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.598 14:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:48.598 14:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.598 14:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.598 14:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.598 14:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.598 14:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.598 14:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.598 14:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.598 14:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.598 14:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.598 14:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.598 "name": "Existed_Raid", 00:13:48.598 "uuid": "b900c83a-70d0-49d9-809e-18143919227d", 00:13:48.598 "strip_size_kb": 64, 00:13:48.598 
"state": "online", 00:13:48.598 "raid_level": "concat", 00:13:48.598 "superblock": true, 00:13:48.598 "num_base_bdevs": 4, 00:13:48.598 "num_base_bdevs_discovered": 4, 00:13:48.598 "num_base_bdevs_operational": 4, 00:13:48.598 "base_bdevs_list": [ 00:13:48.598 { 00:13:48.598 "name": "NewBaseBdev", 00:13:48.598 "uuid": "a4fa7ff4-6bb1-46fa-94e8-74dbd1f31e2c", 00:13:48.598 "is_configured": true, 00:13:48.598 "data_offset": 2048, 00:13:48.598 "data_size": 63488 00:13:48.598 }, 00:13:48.598 { 00:13:48.598 "name": "BaseBdev2", 00:13:48.598 "uuid": "ff8febb8-5a55-4539-a183-8841905263eb", 00:13:48.598 "is_configured": true, 00:13:48.598 "data_offset": 2048, 00:13:48.598 "data_size": 63488 00:13:48.598 }, 00:13:48.598 { 00:13:48.598 "name": "BaseBdev3", 00:13:48.598 "uuid": "b0259b50-3829-44e9-91d7-93cf6c85bbc3", 00:13:48.598 "is_configured": true, 00:13:48.598 "data_offset": 2048, 00:13:48.598 "data_size": 63488 00:13:48.598 }, 00:13:48.598 { 00:13:48.598 "name": "BaseBdev4", 00:13:48.598 "uuid": "dd490580-4593-4390-b286-e4ef95a2b23c", 00:13:48.598 "is_configured": true, 00:13:48.598 "data_offset": 2048, 00:13:48.598 "data_size": 63488 00:13:48.598 } 00:13:48.598 ] 00:13:48.598 }' 00:13:48.598 14:39:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.598 14:39:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.200 14:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:49.200 14:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:49.200 14:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:49.200 14:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:49.200 14:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:49.200 
14:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:49.200 14:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:49.200 14:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:49.200 14:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.200 14:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.200 [2024-11-04 14:39:48.143090] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:49.200 14:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.200 14:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:49.200 "name": "Existed_Raid", 00:13:49.200 "aliases": [ 00:13:49.200 "b900c83a-70d0-49d9-809e-18143919227d" 00:13:49.200 ], 00:13:49.200 "product_name": "Raid Volume", 00:13:49.200 "block_size": 512, 00:13:49.200 "num_blocks": 253952, 00:13:49.200 "uuid": "b900c83a-70d0-49d9-809e-18143919227d", 00:13:49.200 "assigned_rate_limits": { 00:13:49.200 "rw_ios_per_sec": 0, 00:13:49.200 "rw_mbytes_per_sec": 0, 00:13:49.200 "r_mbytes_per_sec": 0, 00:13:49.200 "w_mbytes_per_sec": 0 00:13:49.200 }, 00:13:49.200 "claimed": false, 00:13:49.200 "zoned": false, 00:13:49.200 "supported_io_types": { 00:13:49.200 "read": true, 00:13:49.200 "write": true, 00:13:49.200 "unmap": true, 00:13:49.200 "flush": true, 00:13:49.200 "reset": true, 00:13:49.200 "nvme_admin": false, 00:13:49.200 "nvme_io": false, 00:13:49.200 "nvme_io_md": false, 00:13:49.200 "write_zeroes": true, 00:13:49.200 "zcopy": false, 00:13:49.200 "get_zone_info": false, 00:13:49.200 "zone_management": false, 00:13:49.200 "zone_append": false, 00:13:49.200 "compare": false, 00:13:49.200 "compare_and_write": false, 00:13:49.200 "abort": 
false, 00:13:49.200 "seek_hole": false, 00:13:49.200 "seek_data": false, 00:13:49.200 "copy": false, 00:13:49.200 "nvme_iov_md": false 00:13:49.200 }, 00:13:49.200 "memory_domains": [ 00:13:49.200 { 00:13:49.200 "dma_device_id": "system", 00:13:49.200 "dma_device_type": 1 00:13:49.200 }, 00:13:49.200 { 00:13:49.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.200 "dma_device_type": 2 00:13:49.200 }, 00:13:49.200 { 00:13:49.200 "dma_device_id": "system", 00:13:49.200 "dma_device_type": 1 00:13:49.200 }, 00:13:49.200 { 00:13:49.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.200 "dma_device_type": 2 00:13:49.200 }, 00:13:49.200 { 00:13:49.200 "dma_device_id": "system", 00:13:49.200 "dma_device_type": 1 00:13:49.200 }, 00:13:49.200 { 00:13:49.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.200 "dma_device_type": 2 00:13:49.200 }, 00:13:49.200 { 00:13:49.200 "dma_device_id": "system", 00:13:49.200 "dma_device_type": 1 00:13:49.200 }, 00:13:49.200 { 00:13:49.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.200 "dma_device_type": 2 00:13:49.200 } 00:13:49.200 ], 00:13:49.200 "driver_specific": { 00:13:49.200 "raid": { 00:13:49.200 "uuid": "b900c83a-70d0-49d9-809e-18143919227d", 00:13:49.200 "strip_size_kb": 64, 00:13:49.200 "state": "online", 00:13:49.200 "raid_level": "concat", 00:13:49.200 "superblock": true, 00:13:49.200 "num_base_bdevs": 4, 00:13:49.200 "num_base_bdevs_discovered": 4, 00:13:49.200 "num_base_bdevs_operational": 4, 00:13:49.200 "base_bdevs_list": [ 00:13:49.200 { 00:13:49.200 "name": "NewBaseBdev", 00:13:49.200 "uuid": "a4fa7ff4-6bb1-46fa-94e8-74dbd1f31e2c", 00:13:49.200 "is_configured": true, 00:13:49.200 "data_offset": 2048, 00:13:49.200 "data_size": 63488 00:13:49.200 }, 00:13:49.200 { 00:13:49.200 "name": "BaseBdev2", 00:13:49.200 "uuid": "ff8febb8-5a55-4539-a183-8841905263eb", 00:13:49.200 "is_configured": true, 00:13:49.200 "data_offset": 2048, 00:13:49.200 "data_size": 63488 00:13:49.200 }, 00:13:49.200 { 00:13:49.200 
"name": "BaseBdev3", 00:13:49.200 "uuid": "b0259b50-3829-44e9-91d7-93cf6c85bbc3", 00:13:49.200 "is_configured": true, 00:13:49.200 "data_offset": 2048, 00:13:49.200 "data_size": 63488 00:13:49.200 }, 00:13:49.200 { 00:13:49.200 "name": "BaseBdev4", 00:13:49.200 "uuid": "dd490580-4593-4390-b286-e4ef95a2b23c", 00:13:49.200 "is_configured": true, 00:13:49.200 "data_offset": 2048, 00:13:49.200 "data_size": 63488 00:13:49.200 } 00:13:49.200 ] 00:13:49.200 } 00:13:49.200 } 00:13:49.200 }' 00:13:49.200 14:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:49.200 14:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:49.200 BaseBdev2 00:13:49.200 BaseBdev3 00:13:49.200 BaseBdev4' 00:13:49.200 14:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.200 14:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:49.200 14:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:49.200 14:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:49.200 14:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.200 14:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.200 14:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.200 14:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:49.460 14:39:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.460 [2024-11-04 14:39:48.526710] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:49.460 [2024-11-04 14:39:48.526908] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:49.460 [2024-11-04 14:39:48.527023] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:49.460 [2024-11-04 14:39:48.527113] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:49.460 [2024-11-04 14:39:48.527130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72064 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 72064 ']' 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 72064 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72064 00:13:49.460 killing process with pid 72064 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72064' 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 72064 00:13:49.460 [2024-11-04 14:39:48.566152] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:49.460 14:39:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 72064 00:13:50.028 [2024-11-04 14:39:48.908170] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:50.964 ************************************ 00:13:50.964 END TEST raid_state_function_test_sb 00:13:50.964 ************************************ 00:13:50.964 14:39:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:50.964 00:13:50.964 real 0m12.865s 00:13:50.964 user 0m21.415s 00:13:50.964 sys 
0m1.796s 00:13:50.964 14:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:50.964 14:39:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.964 14:39:49 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:13:50.964 14:39:49 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:50.964 14:39:49 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:50.964 14:39:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:50.964 ************************************ 00:13:50.964 START TEST raid_superblock_test 00:13:50.964 ************************************ 00:13:50.964 14:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 4 00:13:50.964 14:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:13:50.964 14:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:50.964 14:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:50.964 14:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:50.964 14:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:50.964 14:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:50.964 14:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:50.964 14:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:50.964 14:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:50.964 14:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:50.964 14:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:13:50.964 14:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:50.964 14:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:50.964 14:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:13:50.964 14:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:50.964 14:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:50.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.964 14:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72748 00:13:50.964 14:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:50.964 14:39:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72748 00:13:50.964 14:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 72748 ']' 00:13:50.964 14:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.964 14:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:50.964 14:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.964 14:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:50.964 14:39:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.964 [2024-11-04 14:39:50.063557] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:13:50.964 [2024-11-04 14:39:50.063701] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72748 ] 00:13:51.222 [2024-11-04 14:39:50.233576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.481 [2024-11-04 14:39:50.370887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.481 [2024-11-04 14:39:50.584765] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.481 [2024-11-04 14:39:50.584832] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.079 14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:52.079 14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:13:52.079 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:52.079 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:52.079 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:52.079 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:52.079 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:52.079 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:52.079 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:52.079 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:52.079 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:52.079 
14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.079 14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.079 malloc1 00:13:52.079 14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.079 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:52.079 14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.079 14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.079 [2024-11-04 14:39:51.155287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:52.079 [2024-11-04 14:39:51.155391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.079 [2024-11-04 14:39:51.155426] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:52.079 [2024-11-04 14:39:51.155441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.079 [2024-11-04 14:39:51.158414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.079 [2024-11-04 14:39:51.158617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:52.079 pt1 00:13:52.079 14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.079 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:52.079 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:52.079 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:52.079 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:52.079 14:39:51 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:52.079 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:52.079 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:52.079 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:52.079 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:52.079 14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.079 14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.339 malloc2 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.339 [2024-11-04 14:39:51.210222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:52.339 [2024-11-04 14:39:51.210504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.339 [2024-11-04 14:39:51.210546] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:52.339 [2024-11-04 14:39:51.210562] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.339 [2024-11-04 14:39:51.213356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.339 [2024-11-04 14:39:51.213409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:52.339 
pt2 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.339 malloc3 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.339 [2024-11-04 14:39:51.275922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:52.339 [2024-11-04 14:39:51.276046] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.339 [2024-11-04 14:39:51.276078] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:52.339 [2024-11-04 14:39:51.276092] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.339 [2024-11-04 14:39:51.278907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.339 [2024-11-04 14:39:51.279135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:52.339 pt3 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.339 malloc4 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.339 [2024-11-04 14:39:51.331989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:52.339 [2024-11-04 14:39:51.332084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.339 [2024-11-04 14:39:51.332114] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:52.339 [2024-11-04 14:39:51.332128] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.339 [2024-11-04 14:39:51.334897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.339 [2024-11-04 14:39:51.335127] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:52.339 pt4 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.339 [2024-11-04 14:39:51.340060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:52.339 [2024-11-04 
14:39:51.342672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:52.339 [2024-11-04 14:39:51.342773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:52.339 [2024-11-04 14:39:51.342862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:52.339 [2024-11-04 14:39:51.343176] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:52.339 [2024-11-04 14:39:51.343194] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:52.339 [2024-11-04 14:39:51.343540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:52.339 [2024-11-04 14:39:51.343788] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:52.339 [2024-11-04 14:39:51.343809] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:52.339 [2024-11-04 14:39:51.344072] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.339 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.340 "name": "raid_bdev1", 00:13:52.340 "uuid": "bfc823ca-efbe-46d2-a6ea-403f75254ac7", 00:13:52.340 "strip_size_kb": 64, 00:13:52.340 "state": "online", 00:13:52.340 "raid_level": "concat", 00:13:52.340 "superblock": true, 00:13:52.340 "num_base_bdevs": 4, 00:13:52.340 "num_base_bdevs_discovered": 4, 00:13:52.340 "num_base_bdevs_operational": 4, 00:13:52.340 "base_bdevs_list": [ 00:13:52.340 { 00:13:52.340 "name": "pt1", 00:13:52.340 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:52.340 "is_configured": true, 00:13:52.340 "data_offset": 2048, 00:13:52.340 "data_size": 63488 00:13:52.340 }, 00:13:52.340 { 00:13:52.340 "name": "pt2", 00:13:52.340 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:52.340 "is_configured": true, 00:13:52.340 "data_offset": 2048, 00:13:52.340 "data_size": 63488 00:13:52.340 }, 00:13:52.340 { 00:13:52.340 "name": "pt3", 00:13:52.340 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:52.340 "is_configured": true, 00:13:52.340 "data_offset": 2048, 00:13:52.340 
"data_size": 63488 00:13:52.340 }, 00:13:52.340 { 00:13:52.340 "name": "pt4", 00:13:52.340 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:52.340 "is_configured": true, 00:13:52.340 "data_offset": 2048, 00:13:52.340 "data_size": 63488 00:13:52.340 } 00:13:52.340 ] 00:13:52.340 }' 00:13:52.340 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.340 14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.949 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:52.949 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:52.949 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:52.949 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:52.949 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:52.949 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:52.949 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:52.949 14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.949 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:52.949 14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.949 [2024-11-04 14:39:51.852727] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:52.949 14:39:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.949 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:52.949 "name": "raid_bdev1", 00:13:52.949 "aliases": [ 00:13:52.949 "bfc823ca-efbe-46d2-a6ea-403f75254ac7" 
00:13:52.949 ], 00:13:52.949 "product_name": "Raid Volume", 00:13:52.949 "block_size": 512, 00:13:52.949 "num_blocks": 253952, 00:13:52.949 "uuid": "bfc823ca-efbe-46d2-a6ea-403f75254ac7", 00:13:52.949 "assigned_rate_limits": { 00:13:52.949 "rw_ios_per_sec": 0, 00:13:52.949 "rw_mbytes_per_sec": 0, 00:13:52.949 "r_mbytes_per_sec": 0, 00:13:52.949 "w_mbytes_per_sec": 0 00:13:52.949 }, 00:13:52.949 "claimed": false, 00:13:52.949 "zoned": false, 00:13:52.949 "supported_io_types": { 00:13:52.949 "read": true, 00:13:52.949 "write": true, 00:13:52.949 "unmap": true, 00:13:52.949 "flush": true, 00:13:52.949 "reset": true, 00:13:52.949 "nvme_admin": false, 00:13:52.949 "nvme_io": false, 00:13:52.949 "nvme_io_md": false, 00:13:52.949 "write_zeroes": true, 00:13:52.949 "zcopy": false, 00:13:52.949 "get_zone_info": false, 00:13:52.949 "zone_management": false, 00:13:52.949 "zone_append": false, 00:13:52.949 "compare": false, 00:13:52.949 "compare_and_write": false, 00:13:52.949 "abort": false, 00:13:52.949 "seek_hole": false, 00:13:52.949 "seek_data": false, 00:13:52.949 "copy": false, 00:13:52.949 "nvme_iov_md": false 00:13:52.949 }, 00:13:52.949 "memory_domains": [ 00:13:52.949 { 00:13:52.949 "dma_device_id": "system", 00:13:52.949 "dma_device_type": 1 00:13:52.949 }, 00:13:52.949 { 00:13:52.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.949 "dma_device_type": 2 00:13:52.949 }, 00:13:52.949 { 00:13:52.949 "dma_device_id": "system", 00:13:52.949 "dma_device_type": 1 00:13:52.949 }, 00:13:52.949 { 00:13:52.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.949 "dma_device_type": 2 00:13:52.949 }, 00:13:52.949 { 00:13:52.949 "dma_device_id": "system", 00:13:52.949 "dma_device_type": 1 00:13:52.949 }, 00:13:52.949 { 00:13:52.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.949 "dma_device_type": 2 00:13:52.949 }, 00:13:52.949 { 00:13:52.949 "dma_device_id": "system", 00:13:52.949 "dma_device_type": 1 00:13:52.949 }, 00:13:52.949 { 00:13:52.949 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:52.949 "dma_device_type": 2 00:13:52.949 } 00:13:52.949 ], 00:13:52.949 "driver_specific": { 00:13:52.949 "raid": { 00:13:52.949 "uuid": "bfc823ca-efbe-46d2-a6ea-403f75254ac7", 00:13:52.949 "strip_size_kb": 64, 00:13:52.949 "state": "online", 00:13:52.949 "raid_level": "concat", 00:13:52.949 "superblock": true, 00:13:52.949 "num_base_bdevs": 4, 00:13:52.949 "num_base_bdevs_discovered": 4, 00:13:52.949 "num_base_bdevs_operational": 4, 00:13:52.949 "base_bdevs_list": [ 00:13:52.949 { 00:13:52.949 "name": "pt1", 00:13:52.949 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:52.949 "is_configured": true, 00:13:52.949 "data_offset": 2048, 00:13:52.949 "data_size": 63488 00:13:52.949 }, 00:13:52.949 { 00:13:52.949 "name": "pt2", 00:13:52.949 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:52.949 "is_configured": true, 00:13:52.949 "data_offset": 2048, 00:13:52.949 "data_size": 63488 00:13:52.949 }, 00:13:52.949 { 00:13:52.949 "name": "pt3", 00:13:52.949 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:52.949 "is_configured": true, 00:13:52.949 "data_offset": 2048, 00:13:52.949 "data_size": 63488 00:13:52.949 }, 00:13:52.949 { 00:13:52.949 "name": "pt4", 00:13:52.949 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:52.949 "is_configured": true, 00:13:52.949 "data_offset": 2048, 00:13:52.949 "data_size": 63488 00:13:52.949 } 00:13:52.949 ] 00:13:52.949 } 00:13:52.949 } 00:13:52.949 }' 00:13:52.949 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:52.949 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:52.949 pt2 00:13:52.949 pt3 00:13:52.949 pt4' 00:13:52.949 14:39:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:52.949 14:39:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:52.950 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:52.950 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:52.950 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.950 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.950 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:52.950 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.950 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:52.950 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:52.950 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:52.950 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:52.950 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.950 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.950 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.208 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.208 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.208 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.208 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.209 14:39:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:53.209 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.209 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.209 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.209 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.209 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.209 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.209 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:53.209 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:53.209 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:53.209 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.209 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.209 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.209 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:53.209 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:53.209 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:53.209 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:53.209 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:53.209 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.209 [2024-11-04 14:39:52.256733] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:53.209 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.209 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bfc823ca-efbe-46d2-a6ea-403f75254ac7 00:13:53.209 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z bfc823ca-efbe-46d2-a6ea-403f75254ac7 ']' 00:13:53.209 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:53.209 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.209 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.209 [2024-11-04 14:39:52.304384] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:53.209 [2024-11-04 14:39:52.304413] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:53.209 [2024-11-04 14:39:52.304501] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:53.209 [2024-11-04 14:39:52.304582] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:53.209 [2024-11-04 14:39:52.304602] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:53.209 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.209 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.209 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.209 14:39:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x
00:13:53.209 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:13:53.209 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.468 [2024-11-04 14:39:52.472478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:13:53.468 [2024-11-04 14:39:52.475265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:13:53.468 [2024-11-04 14:39:52.475431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:13:53.468 [2024-11-04 14:39:52.475549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:13:53.468 [2024-11-04 14:39:52.475662] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:13:53.468 [2024-11-04 14:39:52.475958] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:13:53.468 [2024-11-04 14:39:52.476068] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:13:53.468 [2024-11-04 14:39:52.476157] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:13:53.468 [2024-11-04 14:39:52.476227] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:13:53.468 [2024-11-04 14:39:52.476376] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:13:53.468 request:
00:13:53.468 {
00:13:53.468 "name": "raid_bdev1",
00:13:53.468 "raid_level": "concat",
00:13:53.468 "base_bdevs": [
00:13:53.468 "malloc1",
00:13:53.468 "malloc2",
00:13:53.468 "malloc3",
00:13:53.468 "malloc4"
00:13:53.468 ],
00:13:53.468 "strip_size_kb": 64,
00:13:53.468 "superblock": false,
00:13:53.468 "method": "bdev_raid_create",
00:13:53.468 "req_id": 1
00:13:53.468 }
00:13:53.468 Got JSON-RPC error response
00:13:53.468 response:
00:13:53.468 {
00:13:53.468 "code": -17,
00:13:53.468 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:13:53.468 }
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1
-u 00000000-0000-0000-0000-000000000001
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.468 [2024-11-04 14:39:52.532792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:13:53.468 [2024-11-04 14:39:52.532973] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:53.468 [2024-11-04 14:39:52.533008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:13:53.468 [2024-11-04 14:39:52.533025] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:53.468 [2024-11-04 14:39:52.536007] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:53.468 [2024-11-04 14:39:52.536187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:13:53.468 [2024-11-04 14:39:52.536288] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:13:53.468 [2024-11-04 14:39:52.536380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:13:53.468 pt1
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:53.468 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:13:53.469 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:53.469 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:53.469 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:53.469 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:53.469 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:53.469 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:53.469 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:53.469 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.469 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.469 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:53.469 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.727 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:53.727 "name": "raid_bdev1",
00:13:53.727 "uuid": "bfc823ca-efbe-46d2-a6ea-403f75254ac7",
00:13:53.727 "strip_size_kb": 64,
00:13:53.727 "state": "configuring",
00:13:53.727 "raid_level": "concat",
00:13:53.727 "superblock": true,
00:13:53.727 "num_base_bdevs": 4,
00:13:53.727 "num_base_bdevs_discovered": 1,
00:13:53.727 "num_base_bdevs_operational": 4,
00:13:53.727 "base_bdevs_list": [
00:13:53.727 {
00:13:53.727 "name": "pt1",
00:13:53.727 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:53.727 "is_configured": true,
00:13:53.727 "data_offset": 2048,
00:13:53.727 "data_size": 63488
00:13:53.727 },
00:13:53.727 {
00:13:53.727 "name": null,
00:13:53.727 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:53.727 "is_configured": false,
00:13:53.727 "data_offset": 2048,
00:13:53.727 "data_size": 63488
00:13:53.727 },
00:13:53.727 {
00:13:53.727 "name": null,
00:13:53.727 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:53.727 "is_configured": false,
00:13:53.727 "data_offset": 2048,
00:13:53.727 "data_size": 63488
00:13:53.727 },
00:13:53.727 {
00:13:53.727 "name": null,
00:13:53.727 "uuid": "00000000-0000-0000-0000-000000000004",
00:13:53.727 "is_configured": false,
00:13:53.727 "data_offset": 2048,
00:13:53.727 "data_size": 63488
00:13:53.727 }
00:13:53.727 ]
00:13:53.727 }'
00:13:53.727 14:39:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:53.727 14:39:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.986 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:13:53.986 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:13:53.986 14:39:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.986 14:39:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.986 [2024-11-04 14:39:53.077002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:13:53.986 [2024-11-04 14:39:53.077108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:53.986 [2024-11-04 14:39:53.077137] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:13:53.986 [2024-11-04 14:39:53.077154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:53.986 [2024-11-04 14:39:53.077709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:53.986 [2024-11-04 14:39:53.077745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:13:53.986 [2024-11-04 14:39:53.077839] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:13:53.986 [2024-11-04 14:39:53.077905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:13:53.986 pt2
00:13:53.986 14:39:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.986 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:13:53.986 14:39:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.986 14:39:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:53.986 [2024-11-04 14:39:53.085008] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:13:53.986 14:39:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:53.986 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4
00:13:53.986 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:53.986 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:53.986 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:13:53.987 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:53.987 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:53.987 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:53.987 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:53.987 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:53.987 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:53.987 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:53.987 14:39:53
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:53.987 14:39:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:53.987 14:39:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:54.245 14:39:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:54.245 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:54.245 "name": "raid_bdev1",
00:13:54.245 "uuid": "bfc823ca-efbe-46d2-a6ea-403f75254ac7",
00:13:54.245 "strip_size_kb": 64,
00:13:54.245 "state": "configuring",
00:13:54.245 "raid_level": "concat",
00:13:54.245 "superblock": true,
00:13:54.245 "num_base_bdevs": 4,
00:13:54.245 "num_base_bdevs_discovered": 1,
00:13:54.245 "num_base_bdevs_operational": 4,
00:13:54.245 "base_bdevs_list": [
00:13:54.245 {
00:13:54.245 "name": "pt1",
00:13:54.245 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:54.245 "is_configured": true,
00:13:54.246 "data_offset": 2048,
00:13:54.246 "data_size": 63488
00:13:54.246 },
00:13:54.246 {
00:13:54.246 "name": null,
00:13:54.246 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:54.246 "is_configured": false,
00:13:54.246 "data_offset": 0,
00:13:54.246 "data_size": 63488
00:13:54.246 },
00:13:54.246 {
00:13:54.246 "name": null,
00:13:54.246 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:54.246 "is_configured": false,
00:13:54.246 "data_offset": 2048,
00:13:54.246 "data_size": 63488
00:13:54.246 },
00:13:54.246 {
00:13:54.246 "name": null,
00:13:54.246 "uuid": "00000000-0000-0000-0000-000000000004",
00:13:54.246 "is_configured": false,
00:13:54.246 "data_offset": 2048,
00:13:54.246 "data_size": 63488
00:13:54.246 }
00:13:54.246 ]
00:13:54.246 }'
00:13:54.246 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:54.246 14:39:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:54.813 [2024-11-04 14:39:53.641225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:13:54.813 [2024-11-04 14:39:53.641342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:54.813 [2024-11-04 14:39:53.641370] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:13:54.813 [2024-11-04 14:39:53.641384] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:54.813 [2024-11-04 14:39:53.641902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:54.813 [2024-11-04 14:39:53.641968] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:13:54.813 [2024-11-04 14:39:53.642076] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:13:54.813 [2024-11-04 14:39:53.642107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:13:54.813 pt2
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:54.813 [2024-11-04 14:39:53.653204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:13:54.813 [2024-11-04 14:39:53.653257] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:54.813 [2024-11-04 14:39:53.653334] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:13:54.813 [2024-11-04 14:39:53.653349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:54.813 [2024-11-04 14:39:53.653764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:54.813 [2024-11-04 14:39:53.653793] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:13:54.813 [2024-11-04 14:39:53.653882] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:13:54.813 [2024-11-04 14:39:53.653922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:13:54.813 pt3
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:54.813 [2024-11-04 14:39:53.661168] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:13:54.813 [2024-11-04 14:39:53.661237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:54.813 [2024-11-04 14:39:53.661264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:13:54.813 [2024-11-04 14:39:53.661277] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:54.813 [2024-11-04 14:39:53.661739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:54.813 [2024-11-04 14:39:53.661769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:13:54.813 [2024-11-04 14:39:53.661845] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:13:54.813 [2024-11-04 14:39:53.661872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:13:54.813 [2024-11-04 14:39:53.662058] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:13:54.813 [2024-11-04 14:39:53.662075] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:13:54.813 [2024-11-04 14:39:53.662374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:13:54.813 [2024-11-04 14:39:53.662566] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:13:54.813 [2024-11-04 14:39:53.662588] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:13:54.813 [2024-11-04 14:39:53.662739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:54.813 pt4
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:13:54.813
14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:54.813 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:54.813 "name": "raid_bdev1",
00:13:54.813 "uuid": "bfc823ca-efbe-46d2-a6ea-403f75254ac7",
00:13:54.813 "strip_size_kb": 64,
00:13:54.813 "state": "online",
00:13:54.813 "raid_level": "concat",
00:13:54.813 "superblock": true,
00:13:54.813 "num_base_bdevs": 4,
00:13:54.813 "num_base_bdevs_discovered": 4,
00:13:54.813 "num_base_bdevs_operational": 4,
00:13:54.813 "base_bdevs_list": [
00:13:54.813 {
00:13:54.813 "name": "pt1",
00:13:54.813 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:54.813 "is_configured": true,
00:13:54.813 "data_offset": 2048,
00:13:54.814 "data_size": 63488
00:13:54.814 },
00:13:54.814 {
00:13:54.814 "name": "pt2",
00:13:54.814 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:54.814 "is_configured": true,
00:13:54.814 "data_offset": 2048,
00:13:54.814 "data_size": 63488
00:13:54.814 },
00:13:54.814 {
00:13:54.814 "name": "pt3",
00:13:54.814 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:54.814 "is_configured": true,
00:13:54.814 "data_offset": 2048,
00:13:54.814 "data_size": 63488
00:13:54.814 },
00:13:54.814 {
00:13:54.814 "name": "pt4",
00:13:54.814 "uuid": "00000000-0000-0000-0000-000000000004",
00:13:54.814 "is_configured": true,
00:13:54.814 "data_offset": 2048,
00:13:54.814 "data_size": 63488
00:13:54.814 }
00:13:54.814 ]
00:13:54.814 }'
00:13:54.814 14:39:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:54.814 14:39:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:55.381 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:13:55.381 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:13:55.381 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:13:55.381 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:13:55.381 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:13:55.381 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:13:55.381 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:55.381 14:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:55.381 14:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:55.381 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' [2024-11-04 14:39:54.205776] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:55.381 14:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:55.381 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:13:55.381 "name": "raid_bdev1",
00:13:55.381 "aliases": [
00:13:55.381 "bfc823ca-efbe-46d2-a6ea-403f75254ac7"
00:13:55.381 ],
00:13:55.381 "product_name": "Raid Volume",
00:13:55.381 "block_size": 512,
00:13:55.381 "num_blocks": 253952,
00:13:55.381 "uuid": "bfc823ca-efbe-46d2-a6ea-403f75254ac7",
00:13:55.381 "assigned_rate_limits": {
00:13:55.381 "rw_ios_per_sec": 0,
00:13:55.381 "rw_mbytes_per_sec": 0,
00:13:55.381 "r_mbytes_per_sec": 0,
00:13:55.381 "w_mbytes_per_sec": 0
00:13:55.381 },
00:13:55.381 "claimed": false,
00:13:55.381 "zoned": false,
00:13:55.381 "supported_io_types": {
00:13:55.381 "read": true,
00:13:55.381 "write": true,
00:13:55.381 "unmap": true,
00:13:55.381 "flush": true,
00:13:55.381 "reset": true,
00:13:55.381 "nvme_admin": false,
00:13:55.381 "nvme_io": false,
00:13:55.381 "nvme_io_md": false,
00:13:55.381 "write_zeroes": true,
00:13:55.381 "zcopy": false,
00:13:55.381 "get_zone_info": false,
00:13:55.381 "zone_management": false,
00:13:55.381 "zone_append": false,
00:13:55.381 "compare": false,
00:13:55.381 "compare_and_write": false,
00:13:55.381 "abort": false,
00:13:55.381 "seek_hole": false,
00:13:55.381 "seek_data": false,
00:13:55.381 "copy": false,
00:13:55.381 "nvme_iov_md": false
00:13:55.381 },
00:13:55.381 "memory_domains": [
00:13:55.381 {
00:13:55.381 "dma_device_id": "system",
00:13:55.381 "dma_device_type": 1
00:13:55.381 },
00:13:55.381 {
00:13:55.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:55.381 "dma_device_type": 2
00:13:55.381 },
00:13:55.381 {
00:13:55.381 "dma_device_id": "system",
00:13:55.381 "dma_device_type": 1
00:13:55.381 },
00:13:55.381 {
00:13:55.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:55.381 "dma_device_type": 2
00:13:55.381 },
00:13:55.381 {
00:13:55.381 "dma_device_id": "system",
00:13:55.381 "dma_device_type": 1
00:13:55.381 },
00:13:55.381 {
00:13:55.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:55.381 "dma_device_type": 2
00:13:55.381 },
00:13:55.381 {
00:13:55.381 "dma_device_id": "system",
00:13:55.381 "dma_device_type": 1
00:13:55.381 },
00:13:55.381 {
00:13:55.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:55.381 "dma_device_type": 2
00:13:55.381 }
00:13:55.381 ],
00:13:55.381 "driver_specific": {
00:13:55.381 "raid": {
00:13:55.381 "uuid": "bfc823ca-efbe-46d2-a6ea-403f75254ac7",
00:13:55.381 "strip_size_kb": 64,
00:13:55.381 "state": "online",
00:13:55.381 "raid_level": "concat",
00:13:55.381 "superblock": true,
00:13:55.381 "num_base_bdevs": 4,
00:13:55.381 "num_base_bdevs_discovered": 4,
00:13:55.381 "num_base_bdevs_operational": 4,
00:13:55.381 "base_bdevs_list": [
00:13:55.382 {
00:13:55.382 "name": "pt1",
00:13:55.382 "uuid": "00000000-0000-0000-0000-000000000001",
00:13:55.382 "is_configured": true,
00:13:55.382 "data_offset": 2048,
00:13:55.382 "data_size": 63488
00:13:55.382 },
00:13:55.382 {
00:13:55.382 "name": "pt2",
00:13:55.382 "uuid": "00000000-0000-0000-0000-000000000002",
00:13:55.382 "is_configured": true,
00:13:55.382 "data_offset": 2048,
00:13:55.382 "data_size": 63488
00:13:55.382 },
00:13:55.382 {
00:13:55.382 "name": "pt3",
00:13:55.382 "uuid": "00000000-0000-0000-0000-000000000003",
00:13:55.382 "is_configured": true,
00:13:55.382 "data_offset": 2048,
00:13:55.382 "data_size": 63488
00:13:55.382 },
00:13:55.382 {
00:13:55.382 "name": "pt4",
00:13:55.382 "uuid": "00000000-0000-0000-0000-000000000004",
00:13:55.382 "is_configured": true,
00:13:55.382 "data_offset": 2048,
00:13:55.382 "data_size": 63488
00:13:55.382 }
00:13:55.382 ]
00:13:55.382 }
00:13:55.382 }
00:13:55.382 }'
00:13:55.382 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:13:55.382 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:13:55.382 pt2
00:13:55.382 pt3
00:13:55.382 pt4'
00:13:55.382 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:55.382 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:13:55.382 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:55.382 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:55.382 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:13:55.382 14:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:55.382 14:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:55.382 14:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:55.382 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:55.382 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:55.382 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:55.382 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:13:55.382 14:39:54 bdev_raid.raid_superblock_test --
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:55.382 14:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:55.382 14:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:55.382 14:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:55.382 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:55.382 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:55.382 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:55.382 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:13:55.382 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:55.382 14:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:55.382 14:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:55.382 14:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:55.640 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:55.640 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:55.640 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:55.640 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:13:55.640 14:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:55.640 14:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:55.640 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:55.640 14:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:55.640 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:55.640 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:55.640 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:55.640 14:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:55.640 14:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:55.640 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' [2024-11-04 14:39:54.573815] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:55.640 14:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:55.640 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' bfc823ca-efbe-46d2-a6ea-403f75254ac7 '!=' bfc823ca-efbe-46d2-a6ea-403f75254ac7 ']'
00:13:55.640 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat
00:13:55.641 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:13:55.641 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:13:55.641 14:39:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72748
00:13:55.641 14:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 72748 ']'
00:13:55.641 14:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 72748
00:13:55.641 14:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname
00:13:55.641 14:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:13:55.641 14:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72748 killing process with pid 72748
00:13:55.641 14:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:13:55.641 14:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:13:55.641 14:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72748'
00:13:55.641 14:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 72748
00:13:55.641 [2024-11-04 14:39:54.643050] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:13:55.641 [2024-11-04 14:39:54.643144] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 14:39:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 72748
00:13:55.641 [2024-11-04 14:39:54.643237] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:13:55.641 [2024-11-04 14:39:54.643253] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:13:55.899 [2024-11-04 14:39:55.002490] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:13:57.282 14:39:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:13:57.282
00:13:57.282 real 0m6.047s
00:13:57.282 user 0m9.136s
00:13:57.282 sys 0m0.911s
00:13:57.282 14:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:13:57.282 14:39:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:13:57.282 ************************************
00:13:57.282 END TEST raid_superblock_test
00:13:57.282 ************************************
00:13:57.282 14:39:56 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read
00:13:57.282 14:39:56 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:13:57.282 14:39:56 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:13:57.282 14:39:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:13:57.282 ************************************
00:13:57.282 START TEST raid_read_error_test
00:13:57.282 ************************************
00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 read
00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat
00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4
00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3
00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:13:57.282 14:39:56 bdev_raid.raid_read_error_test
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4seI2QRBXV 00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73017 00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73017 00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 73017 ']' 00:13:57.282 14:39:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:57.282 14:39:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.282 [2024-11-04 14:39:56.195660] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:13:57.282 [2024-11-04 14:39:56.195835] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73017 ] 00:13:57.282 [2024-11-04 14:39:56.377039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.541 [2024-11-04 14:39:56.508723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.800 [2024-11-04 14:39:56.713728] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:57.800 [2024-11-04 14:39:56.713783] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.368 BaseBdev1_malloc 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.368 true 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.368 [2024-11-04 14:39:57.275772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:58.368 [2024-11-04 14:39:57.275851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:58.368 [2024-11-04 14:39:57.275880] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:58.368 [2024-11-04 14:39:57.275897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:58.368 [2024-11-04 14:39:57.278726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:58.368 [2024-11-04 14:39:57.278988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:58.368 BaseBdev1 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.368 BaseBdev2_malloc 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.368 true 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.368 [2024-11-04 14:39:57.332044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:58.368 [2024-11-04 14:39:57.332124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:58.368 [2024-11-04 14:39:57.332149] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:58.368 [2024-11-04 14:39:57.332167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:58.368 [2024-11-04 14:39:57.335075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:58.368 [2024-11-04 14:39:57.335122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:58.368 BaseBdev2 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.368 BaseBdev3_malloc 00:13:58.368 14:39:57 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.368 true 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.368 [2024-11-04 14:39:57.406384] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:58.368 [2024-11-04 14:39:57.406595] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:58.368 [2024-11-04 14:39:57.406633] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:58.368 [2024-11-04 14:39:57.406653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:58.368 [2024-11-04 14:39:57.409500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:58.368 [2024-11-04 14:39:57.409549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:58.368 BaseBdev3 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.368 BaseBdev4_malloc 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.368 true 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.368 [2024-11-04 14:39:57.467707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:58.368 [2024-11-04 14:39:57.467907] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:58.368 [2024-11-04 14:39:57.467966] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:58.368 [2024-11-04 14:39:57.467988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:58.368 [2024-11-04 14:39:57.470798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:58.368 [2024-11-04 14:39:57.470853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:58.368 BaseBdev4 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.368 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.368 [2024-11-04 14:39:57.479820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:58.369 [2024-11-04 14:39:57.482240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:58.369 [2024-11-04 14:39:57.482490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:58.369 [2024-11-04 14:39:57.482611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:58.369 [2024-11-04 14:39:57.482908] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:58.369 [2024-11-04 14:39:57.482945] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:58.369 [2024-11-04 14:39:57.483263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:58.369 [2024-11-04 14:39:57.483491] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:58.369 [2024-11-04 14:39:57.483515] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:58.369 [2024-11-04 14:39:57.483757] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.369 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.369 14:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:58.369 14:39:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.369 14:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.369 14:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:58.369 14:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.369 14:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:58.369 14:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.369 14:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.369 14:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.369 14:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.627 14:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.627 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.627 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.628 14:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.628 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.628 14:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.628 "name": "raid_bdev1", 00:13:58.628 "uuid": "23eea766-7cae-4429-b7e1-1a1b1314bc46", 00:13:58.628 "strip_size_kb": 64, 00:13:58.628 "state": "online", 00:13:58.628 "raid_level": "concat", 00:13:58.628 "superblock": true, 00:13:58.628 "num_base_bdevs": 4, 00:13:58.628 "num_base_bdevs_discovered": 4, 00:13:58.628 "num_base_bdevs_operational": 4, 00:13:58.628 "base_bdevs_list": [ 
00:13:58.628 { 00:13:58.628 "name": "BaseBdev1", 00:13:58.628 "uuid": "5fbdec0a-5ab7-5437-8aff-9266f8ba5e78", 00:13:58.628 "is_configured": true, 00:13:58.628 "data_offset": 2048, 00:13:58.628 "data_size": 63488 00:13:58.628 }, 00:13:58.628 { 00:13:58.628 "name": "BaseBdev2", 00:13:58.628 "uuid": "6aed3e3d-fdb1-5950-bb34-9992b11b9c63", 00:13:58.628 "is_configured": true, 00:13:58.628 "data_offset": 2048, 00:13:58.628 "data_size": 63488 00:13:58.628 }, 00:13:58.628 { 00:13:58.628 "name": "BaseBdev3", 00:13:58.628 "uuid": "25b1fce1-5eab-52a5-a422-5529084babc9", 00:13:58.628 "is_configured": true, 00:13:58.628 "data_offset": 2048, 00:13:58.628 "data_size": 63488 00:13:58.628 }, 00:13:58.628 { 00:13:58.628 "name": "BaseBdev4", 00:13:58.628 "uuid": "f2e9e5d4-9632-59e5-820c-89bc25977c75", 00:13:58.628 "is_configured": true, 00:13:58.628 "data_offset": 2048, 00:13:58.628 "data_size": 63488 00:13:58.628 } 00:13:58.628 ] 00:13:58.628 }' 00:13:58.628 14:39:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.628 14:39:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.886 14:39:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:58.886 14:39:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:59.145 [2024-11-04 14:39:58.141489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:00.081 14:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:00.081 14:39:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.081 14:39:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.081 14:39:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.081 14:39:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:00.081 14:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:14:00.081 14:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:00.081 14:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:00.081 14:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.081 14:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.081 14:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:00.081 14:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.081 14:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:00.081 14:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.081 14:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.081 14:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.081 14:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.081 14:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.081 14:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.081 14:39:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.081 14:39:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.081 14:39:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.081 14:39:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.081 "name": "raid_bdev1", 00:14:00.081 "uuid": "23eea766-7cae-4429-b7e1-1a1b1314bc46", 00:14:00.081 "strip_size_kb": 64, 00:14:00.081 "state": "online", 00:14:00.081 "raid_level": "concat", 00:14:00.081 "superblock": true, 00:14:00.081 "num_base_bdevs": 4, 00:14:00.081 "num_base_bdevs_discovered": 4, 00:14:00.081 "num_base_bdevs_operational": 4, 00:14:00.081 "base_bdevs_list": [ 00:14:00.082 { 00:14:00.082 "name": "BaseBdev1", 00:14:00.082 "uuid": "5fbdec0a-5ab7-5437-8aff-9266f8ba5e78", 00:14:00.082 "is_configured": true, 00:14:00.082 "data_offset": 2048, 00:14:00.082 "data_size": 63488 00:14:00.082 }, 00:14:00.082 { 00:14:00.082 "name": "BaseBdev2", 00:14:00.082 "uuid": "6aed3e3d-fdb1-5950-bb34-9992b11b9c63", 00:14:00.082 "is_configured": true, 00:14:00.082 "data_offset": 2048, 00:14:00.082 "data_size": 63488 00:14:00.082 }, 00:14:00.082 { 00:14:00.082 "name": "BaseBdev3", 00:14:00.082 "uuid": "25b1fce1-5eab-52a5-a422-5529084babc9", 00:14:00.082 "is_configured": true, 00:14:00.082 "data_offset": 2048, 00:14:00.082 "data_size": 63488 00:14:00.082 }, 00:14:00.082 { 00:14:00.082 "name": "BaseBdev4", 00:14:00.082 "uuid": "f2e9e5d4-9632-59e5-820c-89bc25977c75", 00:14:00.082 "is_configured": true, 00:14:00.082 "data_offset": 2048, 00:14:00.082 "data_size": 63488 00:14:00.082 } 00:14:00.082 ] 00:14:00.082 }' 00:14:00.082 14:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.082 14:39:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.650 14:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:00.650 14:39:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.650 14:39:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.650 [2024-11-04 14:39:59.585290] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:00.650 [2024-11-04 14:39:59.585530] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:00.650 [2024-11-04 14:39:59.589278] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:00.650 [2024-11-04 14:39:59.589458] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.650 [2024-11-04 14:39:59.589519] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:00.650 [2024-11-04 14:39:59.589541] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:00.650 { 00:14:00.650 "results": [ 00:14:00.650 { 00:14:00.650 "job": "raid_bdev1", 00:14:00.650 "core_mask": "0x1", 00:14:00.650 "workload": "randrw", 00:14:00.650 "percentage": 50, 00:14:00.650 "status": "finished", 00:14:00.650 "queue_depth": 1, 00:14:00.650 "io_size": 131072, 00:14:00.650 "runtime": 1.441286, 00:14:00.650 "iops": 11004.75547531857, 00:14:00.650 "mibps": 1375.5944344148213, 00:14:00.650 "io_failed": 1, 00:14:00.650 "io_timeout": 0, 00:14:00.650 "avg_latency_us": 126.75420043328249, 00:14:00.650 "min_latency_us": 37.46909090909091, 00:14:00.650 "max_latency_us": 1906.5018181818182 00:14:00.650 } 00:14:00.650 ], 00:14:00.650 "core_count": 1 00:14:00.650 } 00:14:00.650 14:39:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.650 14:39:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73017 00:14:00.650 14:39:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 73017 ']' 00:14:00.650 14:39:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 73017 00:14:00.650 14:39:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:14:00.650 14:39:59 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:00.650 14:39:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73017 00:14:00.650 14:39:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:00.650 14:39:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:00.650 killing process with pid 73017 00:14:00.650 14:39:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73017' 00:14:00.650 14:39:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 73017 00:14:00.650 [2024-11-04 14:39:59.629664] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:00.650 14:39:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 73017 00:14:00.909 [2024-11-04 14:39:59.901657] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:01.844 14:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.4seI2QRBXV 00:14:01.844 14:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:01.844 14:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:02.163 14:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:14:02.163 14:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:14:02.163 14:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:02.163 14:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:02.163 14:40:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:14:02.163 00:14:02.163 real 0m4.896s 00:14:02.163 user 0m6.085s 00:14:02.163 sys 0m0.635s 00:14:02.163 ************************************ 00:14:02.163 END TEST raid_read_error_test 
00:14:02.163 ************************************ 00:14:02.163 14:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:02.163 14:40:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.163 14:40:01 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:14:02.163 14:40:01 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:02.163 14:40:01 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:02.163 14:40:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:02.163 ************************************ 00:14:02.163 START TEST raid_write_error_test 00:14:02.163 ************************************ 00:14:02.163 14:40:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 write 00:14:02.163 14:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:14:02.163 14:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:02.163 14:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:02.163 14:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:02.163 14:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:02.163 14:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:02.163 14:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:02.163 14:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:02.163 14:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:02.163 14:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:02.163 14:40:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:02.163 14:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:02.163 14:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:02.163 14:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:02.163 14:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:02.163 14:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:02.163 14:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:02.164 14:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:02.164 14:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:02.164 14:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:02.164 14:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:02.164 14:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:02.164 14:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:02.164 14:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:02.164 14:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:14:02.164 14:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:02.164 14:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:02.164 14:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:02.164 14:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.N7SHuV48kz 00:14:02.164 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.164 14:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73163 00:14:02.164 14:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73163 00:14:02.164 14:40:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 73163 ']' 00:14:02.164 14:40:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.164 14:40:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:02.164 14:40:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.164 14:40:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:02.164 14:40:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:02.164 14:40:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.164 [2024-11-04 14:40:01.145302] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:14:02.164 [2024-11-04 14:40:01.145632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73163 ] 00:14:02.423 [2024-11-04 14:40:01.334664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.423 [2024-11-04 14:40:01.484354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.683 [2024-11-04 14:40:01.693743] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:02.683 [2024-11-04 14:40:01.693779] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.252 BaseBdev1_malloc 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.252 true 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.252 [2024-11-04 14:40:02.155664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:03.252 [2024-11-04 14:40:02.155741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.252 [2024-11-04 14:40:02.155771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:03.252 [2024-11-04 14:40:02.155788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.252 [2024-11-04 14:40:02.158778] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.252 [2024-11-04 14:40:02.159014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:03.252 BaseBdev1 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.252 BaseBdev2_malloc 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:03.252 14:40:02 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.252 true 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.252 [2024-11-04 14:40:02.217883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:03.252 [2024-11-04 14:40:02.217995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.252 [2024-11-04 14:40:02.218034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:03.252 [2024-11-04 14:40:02.218051] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.252 [2024-11-04 14:40:02.220850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.252 [2024-11-04 14:40:02.220928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:03.252 BaseBdev2 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:14:03.252 BaseBdev3_malloc 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.252 true 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.252 [2024-11-04 14:40:02.288884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:03.252 [2024-11-04 14:40:02.288967] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.252 [2024-11-04 14:40:02.288995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:03.252 [2024-11-04 14:40:02.289013] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.252 [2024-11-04 14:40:02.291838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.252 [2024-11-04 14:40:02.292070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:03.252 BaseBdev3 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.252 BaseBdev4_malloc 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.252 true 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.252 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.253 [2024-11-04 14:40:02.346187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:03.253 [2024-11-04 14:40:02.346255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.253 [2024-11-04 14:40:02.346284] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:03.253 [2024-11-04 14:40:02.346307] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.253 [2024-11-04 14:40:02.349141] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.253 [2024-11-04 14:40:02.349204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:03.253 BaseBdev4 
00:14:03.253 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.253 14:40:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:03.253 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.253 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.253 [2024-11-04 14:40:02.354348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:03.253 [2024-11-04 14:40:02.356821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:03.253 [2024-11-04 14:40:02.356946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:03.253 [2024-11-04 14:40:02.357053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:03.253 [2024-11-04 14:40:02.357348] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:03.253 [2024-11-04 14:40:02.357376] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:03.253 [2024-11-04 14:40:02.357693] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:03.253 [2024-11-04 14:40:02.357902] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:03.253 [2024-11-04 14:40:02.357920] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:03.253 [2024-11-04 14:40:02.358146] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.253 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.253 14:40:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:14:03.253 14:40:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:03.253 14:40:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.253 14:40:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:03.253 14:40:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.253 14:40:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:03.253 14:40:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.253 14:40:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.253 14:40:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.253 14:40:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.253 14:40:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.253 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.253 14:40:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.253 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.513 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.513 14:40:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.513 "name": "raid_bdev1", 00:14:03.513 "uuid": "ef20c557-d172-4889-acc7-e1d7838a506b", 00:14:03.513 "strip_size_kb": 64, 00:14:03.513 "state": "online", 00:14:03.513 "raid_level": "concat", 00:14:03.513 "superblock": true, 00:14:03.513 "num_base_bdevs": 4, 00:14:03.513 "num_base_bdevs_discovered": 4, 00:14:03.513 
"num_base_bdevs_operational": 4, 00:14:03.513 "base_bdevs_list": [ 00:14:03.513 { 00:14:03.513 "name": "BaseBdev1", 00:14:03.513 "uuid": "2c18f56d-4e7c-5f6b-9c80-bc9b3f27b704", 00:14:03.513 "is_configured": true, 00:14:03.513 "data_offset": 2048, 00:14:03.513 "data_size": 63488 00:14:03.513 }, 00:14:03.513 { 00:14:03.513 "name": "BaseBdev2", 00:14:03.513 "uuid": "d855e6b3-8609-5420-b007-091eb61ea6a1", 00:14:03.513 "is_configured": true, 00:14:03.513 "data_offset": 2048, 00:14:03.513 "data_size": 63488 00:14:03.513 }, 00:14:03.513 { 00:14:03.513 "name": "BaseBdev3", 00:14:03.513 "uuid": "43773f81-11ec-55e7-90bb-381389651304", 00:14:03.513 "is_configured": true, 00:14:03.513 "data_offset": 2048, 00:14:03.513 "data_size": 63488 00:14:03.513 }, 00:14:03.513 { 00:14:03.513 "name": "BaseBdev4", 00:14:03.513 "uuid": "5c62cad2-68b1-5abb-ad93-8db998d201e5", 00:14:03.513 "is_configured": true, 00:14:03.513 "data_offset": 2048, 00:14:03.513 "data_size": 63488 00:14:03.513 } 00:14:03.513 ] 00:14:03.513 }' 00:14:03.513 14:40:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.513 14:40:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.771 14:40:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:03.771 14:40:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:04.030 [2024-11-04 14:40:03.020012] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:04.966 14:40:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:04.966 14:40:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.966 14:40:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.966 14:40:03 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.966 14:40:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:04.966 14:40:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:14:04.966 14:40:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:04.966 14:40:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:14:04.966 14:40:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.966 14:40:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.966 14:40:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:04.966 14:40:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.966 14:40:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:04.966 14:40:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.966 14:40:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.966 14:40:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.966 14:40:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.966 14:40:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.966 14:40:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.966 14:40:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.966 14:40:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.966 14:40:03 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.966 14:40:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.966 "name": "raid_bdev1", 00:14:04.966 "uuid": "ef20c557-d172-4889-acc7-e1d7838a506b", 00:14:04.966 "strip_size_kb": 64, 00:14:04.966 "state": "online", 00:14:04.966 "raid_level": "concat", 00:14:04.966 "superblock": true, 00:14:04.966 "num_base_bdevs": 4, 00:14:04.966 "num_base_bdevs_discovered": 4, 00:14:04.966 "num_base_bdevs_operational": 4, 00:14:04.966 "base_bdevs_list": [ 00:14:04.966 { 00:14:04.966 "name": "BaseBdev1", 00:14:04.966 "uuid": "2c18f56d-4e7c-5f6b-9c80-bc9b3f27b704", 00:14:04.966 "is_configured": true, 00:14:04.966 "data_offset": 2048, 00:14:04.966 "data_size": 63488 00:14:04.966 }, 00:14:04.966 { 00:14:04.966 "name": "BaseBdev2", 00:14:04.966 "uuid": "d855e6b3-8609-5420-b007-091eb61ea6a1", 00:14:04.966 "is_configured": true, 00:14:04.966 "data_offset": 2048, 00:14:04.966 "data_size": 63488 00:14:04.966 }, 00:14:04.966 { 00:14:04.966 "name": "BaseBdev3", 00:14:04.966 "uuid": "43773f81-11ec-55e7-90bb-381389651304", 00:14:04.966 "is_configured": true, 00:14:04.966 "data_offset": 2048, 00:14:04.966 "data_size": 63488 00:14:04.966 }, 00:14:04.966 { 00:14:04.966 "name": "BaseBdev4", 00:14:04.966 "uuid": "5c62cad2-68b1-5abb-ad93-8db998d201e5", 00:14:04.966 "is_configured": true, 00:14:04.966 "data_offset": 2048, 00:14:04.966 "data_size": 63488 00:14:04.966 } 00:14:04.966 ] 00:14:04.966 }' 00:14:04.966 14:40:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.966 14:40:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.533 14:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:05.533 14:40:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.533 14:40:04 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:05.533 [2024-11-04 14:40:04.436163] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:05.533 [2024-11-04 14:40:04.436210] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:05.533 [2024-11-04 14:40:04.439953] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:05.533 [2024-11-04 14:40:04.440066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.533 [2024-11-04 14:40:04.440141] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:05.533 [2024-11-04 14:40:04.440168] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:05.533 { 00:14:05.533 "results": [ 00:14:05.533 { 00:14:05.533 "job": "raid_bdev1", 00:14:05.533 "core_mask": "0x1", 00:14:05.533 "workload": "randrw", 00:14:05.533 "percentage": 50, 00:14:05.533 "status": "finished", 00:14:05.533 "queue_depth": 1, 00:14:05.533 "io_size": 131072, 00:14:05.533 "runtime": 1.413381, 00:14:05.533 "iops": 10174.185163094735, 00:14:05.533 "mibps": 1271.7731453868419, 00:14:05.533 "io_failed": 1, 00:14:05.533 "io_timeout": 0, 00:14:05.533 "avg_latency_us": 137.40127719023207, 00:14:05.533 "min_latency_us": 37.70181818181818, 00:14:05.533 "max_latency_us": 1869.2654545454545 00:14:05.533 } 00:14:05.533 ], 00:14:05.533 "core_count": 1 00:14:05.533 } 00:14:05.533 14:40:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.533 14:40:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73163 00:14:05.533 14:40:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 73163 ']' 00:14:05.533 14:40:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 73163 00:14:05.533 14:40:04 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@957 -- # uname 00:14:05.533 14:40:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:05.533 14:40:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73163 00:14:05.533 killing process with pid 73163 00:14:05.533 14:40:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:05.533 14:40:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:05.533 14:40:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73163' 00:14:05.533 14:40:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 73163 00:14:05.533 [2024-11-04 14:40:04.474735] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:05.533 14:40:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 73163 00:14:05.791 [2024-11-04 14:40:04.779194] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:07.202 14:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.N7SHuV48kz 00:14:07.202 14:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:07.202 14:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:07.202 14:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:14:07.202 14:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:14:07.202 14:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:07.202 14:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:07.202 14:40:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:14:07.202 00:14:07.202 real 0m4.903s 00:14:07.202 user 0m6.014s 
00:14:07.202 sys 0m0.631s 00:14:07.202 ************************************ 00:14:07.202 END TEST raid_write_error_test 00:14:07.202 ************************************ 00:14:07.202 14:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:07.202 14:40:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.202 14:40:05 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:07.202 14:40:05 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:14:07.202 14:40:05 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:07.202 14:40:05 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:07.202 14:40:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:07.202 ************************************ 00:14:07.202 START TEST raid_state_function_test 00:14:07.202 ************************************ 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 false 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:07.202 
14:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:07.202 Process raid pid: 73312 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 
00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73312 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73312' 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73312 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 73312 ']' 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:07.202 14:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.203 14:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:07.203 14:40:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.203 [2024-11-04 14:40:06.103892] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:14:07.203 [2024-11-04 14:40:06.104365] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:07.203 [2024-11-04 14:40:06.297966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.462 [2024-11-04 14:40:06.455380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.720 [2024-11-04 14:40:06.681897] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:07.720 [2024-11-04 14:40:06.682208] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:07.985 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:07.985 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:14:07.985 14:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:07.985 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.985 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.985 [2024-11-04 14:40:07.091732] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:07.985 [2024-11-04 14:40:07.092064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:07.985 [2024-11-04 14:40:07.092107] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:07.985 [2024-11-04 14:40:07.092139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:07.985 [2024-11-04 14:40:07.092154] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:07.985 [2024-11-04 14:40:07.092169] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:07.985 [2024-11-04 14:40:07.092179] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:07.985 [2024-11-04 14:40:07.092193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:07.985 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.985 14:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:07.985 14:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:07.985 14:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:07.985 14:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:07.985 14:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:07.985 14:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:07.985 14:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.985 14:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.985 14:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.985 14:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.985 14:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.985 14:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.985 14:40:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.985 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.258 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.258 14:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.258 "name": "Existed_Raid", 00:14:08.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.258 "strip_size_kb": 0, 00:14:08.258 "state": "configuring", 00:14:08.258 "raid_level": "raid1", 00:14:08.258 "superblock": false, 00:14:08.258 "num_base_bdevs": 4, 00:14:08.258 "num_base_bdevs_discovered": 0, 00:14:08.258 "num_base_bdevs_operational": 4, 00:14:08.258 "base_bdevs_list": [ 00:14:08.258 { 00:14:08.258 "name": "BaseBdev1", 00:14:08.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.258 "is_configured": false, 00:14:08.258 "data_offset": 0, 00:14:08.258 "data_size": 0 00:14:08.258 }, 00:14:08.258 { 00:14:08.258 "name": "BaseBdev2", 00:14:08.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.258 "is_configured": false, 00:14:08.258 "data_offset": 0, 00:14:08.258 "data_size": 0 00:14:08.258 }, 00:14:08.258 { 00:14:08.258 "name": "BaseBdev3", 00:14:08.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.258 "is_configured": false, 00:14:08.258 "data_offset": 0, 00:14:08.258 "data_size": 0 00:14:08.258 }, 00:14:08.258 { 00:14:08.258 "name": "BaseBdev4", 00:14:08.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.258 "is_configured": false, 00:14:08.258 "data_offset": 0, 00:14:08.258 "data_size": 0 00:14:08.258 } 00:14:08.258 ] 00:14:08.258 }' 00:14:08.258 14:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.258 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.517 14:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:14:08.517 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.517 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.517 [2024-11-04 14:40:07.627860] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:08.517 [2024-11-04 14:40:07.628131] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:08.517 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.517 14:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:08.517 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.517 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.517 [2024-11-04 14:40:07.635813] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:08.517 [2024-11-04 14:40:07.636000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:08.517 [2024-11-04 14:40:07.636026] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:08.517 [2024-11-04 14:40:07.636044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:08.517 [2024-11-04 14:40:07.636054] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:08.517 [2024-11-04 14:40:07.636067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:08.517 [2024-11-04 14:40:07.636077] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:08.517 [2024-11-04 14:40:07.636090] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:08.776 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.776 14:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:08.776 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.776 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.776 [2024-11-04 14:40:07.681382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:08.776 BaseBdev1 00:14:08.776 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.776 14:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:08.776 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:14:08.776 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:08.776 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:08.776 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:08.776 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:08.776 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:08.776 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.776 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.776 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.776 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:08.776 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.776 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.776 [ 00:14:08.776 { 00:14:08.776 "name": "BaseBdev1", 00:14:08.776 "aliases": [ 00:14:08.776 "f6e64f31-60bd-4e5a-853b-b28955e15aca" 00:14:08.776 ], 00:14:08.776 "product_name": "Malloc disk", 00:14:08.776 "block_size": 512, 00:14:08.776 "num_blocks": 65536, 00:14:08.776 "uuid": "f6e64f31-60bd-4e5a-853b-b28955e15aca", 00:14:08.776 "assigned_rate_limits": { 00:14:08.776 "rw_ios_per_sec": 0, 00:14:08.776 "rw_mbytes_per_sec": 0, 00:14:08.776 "r_mbytes_per_sec": 0, 00:14:08.776 "w_mbytes_per_sec": 0 00:14:08.776 }, 00:14:08.776 "claimed": true, 00:14:08.776 "claim_type": "exclusive_write", 00:14:08.776 "zoned": false, 00:14:08.776 "supported_io_types": { 00:14:08.776 "read": true, 00:14:08.776 "write": true, 00:14:08.776 "unmap": true, 00:14:08.776 "flush": true, 00:14:08.776 "reset": true, 00:14:08.776 "nvme_admin": false, 00:14:08.776 "nvme_io": false, 00:14:08.776 "nvme_io_md": false, 00:14:08.776 "write_zeroes": true, 00:14:08.776 "zcopy": true, 00:14:08.776 "get_zone_info": false, 00:14:08.776 "zone_management": false, 00:14:08.776 "zone_append": false, 00:14:08.776 "compare": false, 00:14:08.776 "compare_and_write": false, 00:14:08.776 "abort": true, 00:14:08.776 "seek_hole": false, 00:14:08.776 "seek_data": false, 00:14:08.777 "copy": true, 00:14:08.777 "nvme_iov_md": false 00:14:08.777 }, 00:14:08.777 "memory_domains": [ 00:14:08.777 { 00:14:08.777 "dma_device_id": "system", 00:14:08.777 "dma_device_type": 1 00:14:08.777 }, 00:14:08.777 { 00:14:08.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.777 "dma_device_type": 2 00:14:08.777 } 00:14:08.777 ], 00:14:08.777 "driver_specific": {} 00:14:08.777 } 00:14:08.777 ] 00:14:08.777 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:08.777 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:08.777 14:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:08.777 14:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:08.777 14:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:08.777 14:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.777 14:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:08.777 14:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:08.777 14:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.777 14:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.777 14:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.777 14:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.777 14:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.777 14:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.777 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.777 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.777 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.777 14:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.777 "name": "Existed_Raid", 
00:14:08.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.777 "strip_size_kb": 0, 00:14:08.777 "state": "configuring", 00:14:08.777 "raid_level": "raid1", 00:14:08.777 "superblock": false, 00:14:08.777 "num_base_bdevs": 4, 00:14:08.777 "num_base_bdevs_discovered": 1, 00:14:08.777 "num_base_bdevs_operational": 4, 00:14:08.777 "base_bdevs_list": [ 00:14:08.777 { 00:14:08.777 "name": "BaseBdev1", 00:14:08.777 "uuid": "f6e64f31-60bd-4e5a-853b-b28955e15aca", 00:14:08.777 "is_configured": true, 00:14:08.777 "data_offset": 0, 00:14:08.777 "data_size": 65536 00:14:08.777 }, 00:14:08.777 { 00:14:08.777 "name": "BaseBdev2", 00:14:08.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.777 "is_configured": false, 00:14:08.777 "data_offset": 0, 00:14:08.777 "data_size": 0 00:14:08.777 }, 00:14:08.777 { 00:14:08.777 "name": "BaseBdev3", 00:14:08.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.777 "is_configured": false, 00:14:08.777 "data_offset": 0, 00:14:08.777 "data_size": 0 00:14:08.777 }, 00:14:08.777 { 00:14:08.777 "name": "BaseBdev4", 00:14:08.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.777 "is_configured": false, 00:14:08.777 "data_offset": 0, 00:14:08.777 "data_size": 0 00:14:08.777 } 00:14:08.777 ] 00:14:08.777 }' 00:14:08.777 14:40:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.777 14:40:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.344 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:09.344 14:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.344 14:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.344 [2024-11-04 14:40:08.233613] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:09.344 [2024-11-04 14:40:08.233675] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:09.344 14:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.344 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:09.344 14:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.345 14:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.345 [2024-11-04 14:40:08.245670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:09.345 [2024-11-04 14:40:08.248295] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:09.345 [2024-11-04 14:40:08.248566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:09.345 [2024-11-04 14:40:08.248686] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:09.345 [2024-11-04 14:40:08.248824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:09.345 [2024-11-04 14:40:08.248847] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:09.345 [2024-11-04 14:40:08.248863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:09.345 14:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.345 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:09.345 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:09.345 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:09.345 
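Each `verify_raid_bdev_state Existed_Raid configuring raid1 0 4` call traced here fetches the raid via `rpc_cmd bdev_raid_get_bdevs all`, filters it with `jq -r '.[] | select(.name == "Existed_Raid")'`, and checks fields such as `state` and `num_base_bdevs_discovered`. A self-contained sketch of that field check, with the RPC output replaced by a canned sample condensed from this log and a pure-bash `get_field` helper (hypothetical; the real script uses jq):

```shell
# Canned bdev_raid_get_bdevs output, condensed from the JSON dumped in this log.
raid_bdev_info='{"name":"Existed_Raid","state":"configuring","raid_level":"raid1","superblock":false,"num_base_bdevs":4,"num_base_bdevs_discovered":1,"num_base_bdevs_operational":4}'

# Illustrative stand-in for the jq field extraction (real test: jq -r '.field').
get_field() {
  local json=$1 key=$2 rest
  rest=${json#*\"$key\":}   # drop everything through "key":
  rest=${rest%%,*}          # keep up to the next comma...
  rest=${rest%\}}           # ...or the closing brace for the last field
  echo "${rest//\"/}"       # strip quotes from string values
}

echo "state=$(get_field "$raid_bdev_info" state)"
echo "discovered=$(get_field "$raid_bdev_info" num_base_bdevs_discovered)"
```

The expected-state comparison in the real helper then asserts `state == configuring` until all four base bdevs are discovered.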
14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:09.345 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:09.345 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:09.345 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:09.345 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:09.345 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.345 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.345 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.345 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.345 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.345 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.345 14:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.345 14:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.345 14:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.345 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.345 "name": "Existed_Raid", 00:14:09.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.345 "strip_size_kb": 0, 00:14:09.345 "state": "configuring", 00:14:09.345 "raid_level": "raid1", 00:14:09.345 "superblock": false, 00:14:09.345 "num_base_bdevs": 4, 00:14:09.345 "num_base_bdevs_discovered": 1, 
00:14:09.345 "num_base_bdevs_operational": 4, 00:14:09.345 "base_bdevs_list": [ 00:14:09.345 { 00:14:09.345 "name": "BaseBdev1", 00:14:09.345 "uuid": "f6e64f31-60bd-4e5a-853b-b28955e15aca", 00:14:09.345 "is_configured": true, 00:14:09.345 "data_offset": 0, 00:14:09.345 "data_size": 65536 00:14:09.345 }, 00:14:09.345 { 00:14:09.345 "name": "BaseBdev2", 00:14:09.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.345 "is_configured": false, 00:14:09.345 "data_offset": 0, 00:14:09.345 "data_size": 0 00:14:09.345 }, 00:14:09.345 { 00:14:09.345 "name": "BaseBdev3", 00:14:09.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.345 "is_configured": false, 00:14:09.345 "data_offset": 0, 00:14:09.345 "data_size": 0 00:14:09.345 }, 00:14:09.345 { 00:14:09.345 "name": "BaseBdev4", 00:14:09.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.345 "is_configured": false, 00:14:09.345 "data_offset": 0, 00:14:09.345 "data_size": 0 00:14:09.345 } 00:14:09.345 ] 00:14:09.345 }' 00:14:09.345 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.345 14:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.970 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:09.970 14:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.970 14:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.970 [2024-11-04 14:40:08.801038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:09.970 BaseBdev2 00:14:09.970 14:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.970 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:09.970 14:40:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:14:09.970 14:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:09.970 14:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:09.970 14:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:09.970 14:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:09.970 14:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:09.970 14:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.970 14:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.970 14:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.970 14:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:09.970 14:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.970 14:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.970 [ 00:14:09.970 { 00:14:09.970 "name": "BaseBdev2", 00:14:09.970 "aliases": [ 00:14:09.970 "1f48f17b-b8d3-44f9-8eed-4073aab2f5f4" 00:14:09.970 ], 00:14:09.970 "product_name": "Malloc disk", 00:14:09.970 "block_size": 512, 00:14:09.970 "num_blocks": 65536, 00:14:09.970 "uuid": "1f48f17b-b8d3-44f9-8eed-4073aab2f5f4", 00:14:09.970 "assigned_rate_limits": { 00:14:09.970 "rw_ios_per_sec": 0, 00:14:09.970 "rw_mbytes_per_sec": 0, 00:14:09.970 "r_mbytes_per_sec": 0, 00:14:09.970 "w_mbytes_per_sec": 0 00:14:09.970 }, 00:14:09.970 "claimed": true, 00:14:09.970 "claim_type": "exclusive_write", 00:14:09.970 "zoned": false, 00:14:09.970 "supported_io_types": { 00:14:09.970 "read": true, 
00:14:09.970 "write": true, 00:14:09.970 "unmap": true, 00:14:09.970 "flush": true, 00:14:09.970 "reset": true, 00:14:09.970 "nvme_admin": false, 00:14:09.971 "nvme_io": false, 00:14:09.971 "nvme_io_md": false, 00:14:09.971 "write_zeroes": true, 00:14:09.971 "zcopy": true, 00:14:09.971 "get_zone_info": false, 00:14:09.971 "zone_management": false, 00:14:09.971 "zone_append": false, 00:14:09.971 "compare": false, 00:14:09.971 "compare_and_write": false, 00:14:09.971 "abort": true, 00:14:09.971 "seek_hole": false, 00:14:09.971 "seek_data": false, 00:14:09.971 "copy": true, 00:14:09.971 "nvme_iov_md": false 00:14:09.971 }, 00:14:09.971 "memory_domains": [ 00:14:09.971 { 00:14:09.971 "dma_device_id": "system", 00:14:09.971 "dma_device_type": 1 00:14:09.971 }, 00:14:09.971 { 00:14:09.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.971 "dma_device_type": 2 00:14:09.971 } 00:14:09.971 ], 00:14:09.971 "driver_specific": {} 00:14:09.971 } 00:14:09.971 ] 00:14:09.971 14:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.971 14:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:09.971 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:09.971 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:09.971 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:09.971 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:09.971 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:09.971 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:09.971 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:14:09.971 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:09.971 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.971 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.971 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.971 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.971 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.971 14:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.971 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.971 14:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.971 14:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.971 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.971 "name": "Existed_Raid", 00:14:09.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.971 "strip_size_kb": 0, 00:14:09.971 "state": "configuring", 00:14:09.971 "raid_level": "raid1", 00:14:09.971 "superblock": false, 00:14:09.971 "num_base_bdevs": 4, 00:14:09.971 "num_base_bdevs_discovered": 2, 00:14:09.971 "num_base_bdevs_operational": 4, 00:14:09.971 "base_bdevs_list": [ 00:14:09.971 { 00:14:09.971 "name": "BaseBdev1", 00:14:09.971 "uuid": "f6e64f31-60bd-4e5a-853b-b28955e15aca", 00:14:09.971 "is_configured": true, 00:14:09.971 "data_offset": 0, 00:14:09.971 "data_size": 65536 00:14:09.971 }, 00:14:09.971 { 00:14:09.971 "name": "BaseBdev2", 00:14:09.971 "uuid": "1f48f17b-b8d3-44f9-8eed-4073aab2f5f4", 00:14:09.971 "is_configured": true, 
00:14:09.971 "data_offset": 0, 00:14:09.971 "data_size": 65536 00:14:09.971 }, 00:14:09.971 { 00:14:09.971 "name": "BaseBdev3", 00:14:09.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.971 "is_configured": false, 00:14:09.971 "data_offset": 0, 00:14:09.971 "data_size": 0 00:14:09.971 }, 00:14:09.971 { 00:14:09.971 "name": "BaseBdev4", 00:14:09.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.971 "is_configured": false, 00:14:09.971 "data_offset": 0, 00:14:09.971 "data_size": 0 00:14:09.971 } 00:14:09.971 ] 00:14:09.971 }' 00:14:09.971 14:40:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.971 14:40:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.540 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:10.540 14:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.540 14:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.540 [2024-11-04 14:40:09.413552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:10.540 BaseBdev3 00:14:10.540 14:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.540 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:10.540 14:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:14:10.540 14:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:10.540 14:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:10.540 14:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:10.540 14:40:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:10.540 14:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:10.540 14:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.540 14:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.540 14:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.540 14:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:10.540 14:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.540 14:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.540 [ 00:14:10.540 { 00:14:10.540 "name": "BaseBdev3", 00:14:10.540 "aliases": [ 00:14:10.540 "20f35d3a-4013-4a65-b1c5-7241e28e017d" 00:14:10.540 ], 00:14:10.540 "product_name": "Malloc disk", 00:14:10.540 "block_size": 512, 00:14:10.540 "num_blocks": 65536, 00:14:10.540 "uuid": "20f35d3a-4013-4a65-b1c5-7241e28e017d", 00:14:10.540 "assigned_rate_limits": { 00:14:10.540 "rw_ios_per_sec": 0, 00:14:10.540 "rw_mbytes_per_sec": 0, 00:14:10.540 "r_mbytes_per_sec": 0, 00:14:10.540 "w_mbytes_per_sec": 0 00:14:10.540 }, 00:14:10.540 "claimed": true, 00:14:10.540 "claim_type": "exclusive_write", 00:14:10.540 "zoned": false, 00:14:10.540 "supported_io_types": { 00:14:10.540 "read": true, 00:14:10.540 "write": true, 00:14:10.540 "unmap": true, 00:14:10.540 "flush": true, 00:14:10.540 "reset": true, 00:14:10.540 "nvme_admin": false, 00:14:10.540 "nvme_io": false, 00:14:10.540 "nvme_io_md": false, 00:14:10.540 "write_zeroes": true, 00:14:10.540 "zcopy": true, 00:14:10.540 "get_zone_info": false, 00:14:10.540 "zone_management": false, 00:14:10.540 "zone_append": false, 00:14:10.540 "compare": false, 00:14:10.540 "compare_and_write": false, 
00:14:10.540 "abort": true, 00:14:10.540 "seek_hole": false, 00:14:10.540 "seek_data": false, 00:14:10.540 "copy": true, 00:14:10.540 "nvme_iov_md": false 00:14:10.540 }, 00:14:10.540 "memory_domains": [ 00:14:10.540 { 00:14:10.540 "dma_device_id": "system", 00:14:10.540 "dma_device_type": 1 00:14:10.540 }, 00:14:10.540 { 00:14:10.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.540 "dma_device_type": 2 00:14:10.540 } 00:14:10.540 ], 00:14:10.540 "driver_specific": {} 00:14:10.540 } 00:14:10.540 ] 00:14:10.540 14:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.540 14:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:10.540 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:10.540 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:10.540 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:10.540 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.540 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:10.540 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.540 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.540 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:10.541 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.541 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.541 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
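The BaseBdev2 and BaseBdev3 additions above follow the `(( i = 1 )) ... (( i < num_base_bdevs ))` loop from `bdev_raid.sh` (@250-253): create one malloc base bdev per iteration, then re-verify that `num_base_bdevs_discovered` advanced while the raid stays `configuring`. A stand-alone sketch of that progression (the `rpc_cmd` call is left as a comment because SPDK is not assumed available here):

```shell
# Sketch of the add-one-base-bdev loop (bdev_raid.sh @250-253).
# Each real iteration runs: rpc_cmd bdev_malloc_create 32 512 -b "BaseBdev$n"
# followed by: verify_raid_bdev_state Existed_Raid configuring raid1 0 4
num_base_bdevs=4
discovered=1  # BaseBdev1 was already created before the loop entered
for ((i = 1; i < num_base_bdevs; i++)); do
  n=$((i + 1))
  # rpc_cmd bdev_malloc_create 32 512 -b "BaseBdev$n"   # real RPC in the test
  discovered=$((discovered + 1))
  echo "after BaseBdev$n: num_base_bdevs_discovered=$discovered"
done
```

Once `discovered` reaches 4 the raid bdev leaves `configuring`, which matches the `raid_bdev_configure_cont` lines near the end of this excerpt where BaseBdev4 is claimed.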
00:14:10.541 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.541 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.541 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.541 14:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.541 14:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.541 14:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.541 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.541 "name": "Existed_Raid", 00:14:10.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.541 "strip_size_kb": 0, 00:14:10.541 "state": "configuring", 00:14:10.541 "raid_level": "raid1", 00:14:10.541 "superblock": false, 00:14:10.541 "num_base_bdevs": 4, 00:14:10.541 "num_base_bdevs_discovered": 3, 00:14:10.541 "num_base_bdevs_operational": 4, 00:14:10.541 "base_bdevs_list": [ 00:14:10.541 { 00:14:10.541 "name": "BaseBdev1", 00:14:10.541 "uuid": "f6e64f31-60bd-4e5a-853b-b28955e15aca", 00:14:10.541 "is_configured": true, 00:14:10.541 "data_offset": 0, 00:14:10.541 "data_size": 65536 00:14:10.541 }, 00:14:10.541 { 00:14:10.541 "name": "BaseBdev2", 00:14:10.541 "uuid": "1f48f17b-b8d3-44f9-8eed-4073aab2f5f4", 00:14:10.541 "is_configured": true, 00:14:10.541 "data_offset": 0, 00:14:10.541 "data_size": 65536 00:14:10.541 }, 00:14:10.541 { 00:14:10.541 "name": "BaseBdev3", 00:14:10.541 "uuid": "20f35d3a-4013-4a65-b1c5-7241e28e017d", 00:14:10.541 "is_configured": true, 00:14:10.541 "data_offset": 0, 00:14:10.541 "data_size": 65536 00:14:10.541 }, 00:14:10.541 { 00:14:10.541 "name": "BaseBdev4", 00:14:10.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.541 "is_configured": false, 
00:14:10.541 "data_offset": 0, 00:14:10.541 "data_size": 0 00:14:10.541 } 00:14:10.541 ] 00:14:10.541 }' 00:14:10.541 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.541 14:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.106 14:40:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:11.106 14:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.106 14:40:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.106 [2024-11-04 14:40:10.010228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:11.106 [2024-11-04 14:40:10.010516] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:11.106 [2024-11-04 14:40:10.010539] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:11.106 [2024-11-04 14:40:10.010904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:11.106 [2024-11-04 14:40:10.011196] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:11.106 [2024-11-04 14:40:10.011218] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:11.106 [2024-11-04 14:40:10.011543] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.106 BaseBdev4 00:14:11.106 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.106 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:11.106 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:14:11.106 14:40:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:11.106 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:11.106 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:11.106 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:11.106 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:11.106 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.106 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.106 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.106 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:11.106 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.106 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.106 [ 00:14:11.106 { 00:14:11.106 "name": "BaseBdev4", 00:14:11.106 "aliases": [ 00:14:11.106 "3386bc7a-937f-4504-8f03-1e1f1b1eafa8" 00:14:11.106 ], 00:14:11.106 "product_name": "Malloc disk", 00:14:11.106 "block_size": 512, 00:14:11.106 "num_blocks": 65536, 00:14:11.106 "uuid": "3386bc7a-937f-4504-8f03-1e1f1b1eafa8", 00:14:11.106 "assigned_rate_limits": { 00:14:11.106 "rw_ios_per_sec": 0, 00:14:11.106 "rw_mbytes_per_sec": 0, 00:14:11.106 "r_mbytes_per_sec": 0, 00:14:11.106 "w_mbytes_per_sec": 0 00:14:11.106 }, 00:14:11.106 "claimed": true, 00:14:11.106 "claim_type": "exclusive_write", 00:14:11.106 "zoned": false, 00:14:11.106 "supported_io_types": { 00:14:11.106 "read": true, 00:14:11.106 "write": true, 00:14:11.106 "unmap": true, 00:14:11.106 "flush": true, 00:14:11.106 "reset": true, 00:14:11.106 
"nvme_admin": false, 00:14:11.106 "nvme_io": false, 00:14:11.106 "nvme_io_md": false, 00:14:11.106 "write_zeroes": true, 00:14:11.106 "zcopy": true, 00:14:11.106 "get_zone_info": false, 00:14:11.106 "zone_management": false, 00:14:11.106 "zone_append": false, 00:14:11.106 "compare": false, 00:14:11.106 "compare_and_write": false, 00:14:11.106 "abort": true, 00:14:11.106 "seek_hole": false, 00:14:11.106 "seek_data": false, 00:14:11.106 "copy": true, 00:14:11.106 "nvme_iov_md": false 00:14:11.106 }, 00:14:11.106 "memory_domains": [ 00:14:11.106 { 00:14:11.106 "dma_device_id": "system", 00:14:11.106 "dma_device_type": 1 00:14:11.106 }, 00:14:11.106 { 00:14:11.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.106 "dma_device_type": 2 00:14:11.106 } 00:14:11.106 ], 00:14:11.106 "driver_specific": {} 00:14:11.106 } 00:14:11.106 ] 00:14:11.106 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.106 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:11.106 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:11.106 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:11.106 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:14:11.106 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:11.106 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.106 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.106 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:11.106 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:11.106 14:40:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.106 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.106 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.106 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.106 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.106 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.106 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.106 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.106 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.106 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.106 "name": "Existed_Raid", 00:14:11.106 "uuid": "faeb7f2b-3c62-4523-a1be-0ab9c1e6f7c3", 00:14:11.106 "strip_size_kb": 0, 00:14:11.106 "state": "online", 00:14:11.106 "raid_level": "raid1", 00:14:11.106 "superblock": false, 00:14:11.106 "num_base_bdevs": 4, 00:14:11.106 "num_base_bdevs_discovered": 4, 00:14:11.106 "num_base_bdevs_operational": 4, 00:14:11.106 "base_bdevs_list": [ 00:14:11.106 { 00:14:11.106 "name": "BaseBdev1", 00:14:11.106 "uuid": "f6e64f31-60bd-4e5a-853b-b28955e15aca", 00:14:11.106 "is_configured": true, 00:14:11.106 "data_offset": 0, 00:14:11.106 "data_size": 65536 00:14:11.106 }, 00:14:11.106 { 00:14:11.106 "name": "BaseBdev2", 00:14:11.106 "uuid": "1f48f17b-b8d3-44f9-8eed-4073aab2f5f4", 00:14:11.106 "is_configured": true, 00:14:11.106 "data_offset": 0, 00:14:11.106 "data_size": 65536 00:14:11.106 }, 00:14:11.106 { 00:14:11.107 "name": "BaseBdev3", 00:14:11.107 "uuid": 
"20f35d3a-4013-4a65-b1c5-7241e28e017d", 00:14:11.107 "is_configured": true, 00:14:11.107 "data_offset": 0, 00:14:11.107 "data_size": 65536 00:14:11.107 }, 00:14:11.107 { 00:14:11.107 "name": "BaseBdev4", 00:14:11.107 "uuid": "3386bc7a-937f-4504-8f03-1e1f1b1eafa8", 00:14:11.107 "is_configured": true, 00:14:11.107 "data_offset": 0, 00:14:11.107 "data_size": 65536 00:14:11.107 } 00:14:11.107 ] 00:14:11.107 }' 00:14:11.107 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.107 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.675 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:11.675 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:11.675 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:11.675 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:11.675 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:11.675 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:11.675 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:11.675 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:11.675 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.675 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.675 [2024-11-04 14:40:10.586958] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:11.675 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.675 14:40:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:11.675 "name": "Existed_Raid", 00:14:11.675 "aliases": [ 00:14:11.675 "faeb7f2b-3c62-4523-a1be-0ab9c1e6f7c3" 00:14:11.675 ], 00:14:11.675 "product_name": "Raid Volume", 00:14:11.675 "block_size": 512, 00:14:11.675 "num_blocks": 65536, 00:14:11.675 "uuid": "faeb7f2b-3c62-4523-a1be-0ab9c1e6f7c3", 00:14:11.675 "assigned_rate_limits": { 00:14:11.675 "rw_ios_per_sec": 0, 00:14:11.675 "rw_mbytes_per_sec": 0, 00:14:11.675 "r_mbytes_per_sec": 0, 00:14:11.675 "w_mbytes_per_sec": 0 00:14:11.675 }, 00:14:11.675 "claimed": false, 00:14:11.675 "zoned": false, 00:14:11.675 "supported_io_types": { 00:14:11.675 "read": true, 00:14:11.675 "write": true, 00:14:11.675 "unmap": false, 00:14:11.675 "flush": false, 00:14:11.675 "reset": true, 00:14:11.675 "nvme_admin": false, 00:14:11.675 "nvme_io": false, 00:14:11.675 "nvme_io_md": false, 00:14:11.675 "write_zeroes": true, 00:14:11.675 "zcopy": false, 00:14:11.675 "get_zone_info": false, 00:14:11.675 "zone_management": false, 00:14:11.675 "zone_append": false, 00:14:11.675 "compare": false, 00:14:11.675 "compare_and_write": false, 00:14:11.675 "abort": false, 00:14:11.675 "seek_hole": false, 00:14:11.675 "seek_data": false, 00:14:11.675 "copy": false, 00:14:11.675 "nvme_iov_md": false 00:14:11.675 }, 00:14:11.675 "memory_domains": [ 00:14:11.675 { 00:14:11.675 "dma_device_id": "system", 00:14:11.675 "dma_device_type": 1 00:14:11.675 }, 00:14:11.675 { 00:14:11.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.675 "dma_device_type": 2 00:14:11.675 }, 00:14:11.675 { 00:14:11.675 "dma_device_id": "system", 00:14:11.675 "dma_device_type": 1 00:14:11.675 }, 00:14:11.675 { 00:14:11.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.675 "dma_device_type": 2 00:14:11.675 }, 00:14:11.675 { 00:14:11.675 "dma_device_id": "system", 00:14:11.675 "dma_device_type": 1 00:14:11.675 }, 00:14:11.675 { 00:14:11.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:14:11.675 "dma_device_type": 2 00:14:11.675 }, 00:14:11.675 { 00:14:11.675 "dma_device_id": "system", 00:14:11.675 "dma_device_type": 1 00:14:11.675 }, 00:14:11.675 { 00:14:11.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.675 "dma_device_type": 2 00:14:11.675 } 00:14:11.675 ], 00:14:11.675 "driver_specific": { 00:14:11.675 "raid": { 00:14:11.675 "uuid": "faeb7f2b-3c62-4523-a1be-0ab9c1e6f7c3", 00:14:11.675 "strip_size_kb": 0, 00:14:11.675 "state": "online", 00:14:11.675 "raid_level": "raid1", 00:14:11.675 "superblock": false, 00:14:11.675 "num_base_bdevs": 4, 00:14:11.675 "num_base_bdevs_discovered": 4, 00:14:11.675 "num_base_bdevs_operational": 4, 00:14:11.675 "base_bdevs_list": [ 00:14:11.675 { 00:14:11.675 "name": "BaseBdev1", 00:14:11.675 "uuid": "f6e64f31-60bd-4e5a-853b-b28955e15aca", 00:14:11.675 "is_configured": true, 00:14:11.675 "data_offset": 0, 00:14:11.675 "data_size": 65536 00:14:11.675 }, 00:14:11.675 { 00:14:11.675 "name": "BaseBdev2", 00:14:11.675 "uuid": "1f48f17b-b8d3-44f9-8eed-4073aab2f5f4", 00:14:11.675 "is_configured": true, 00:14:11.675 "data_offset": 0, 00:14:11.675 "data_size": 65536 00:14:11.675 }, 00:14:11.675 { 00:14:11.675 "name": "BaseBdev3", 00:14:11.675 "uuid": "20f35d3a-4013-4a65-b1c5-7241e28e017d", 00:14:11.675 "is_configured": true, 00:14:11.675 "data_offset": 0, 00:14:11.676 "data_size": 65536 00:14:11.676 }, 00:14:11.676 { 00:14:11.676 "name": "BaseBdev4", 00:14:11.676 "uuid": "3386bc7a-937f-4504-8f03-1e1f1b1eafa8", 00:14:11.676 "is_configured": true, 00:14:11.676 "data_offset": 0, 00:14:11.676 "data_size": 65536 00:14:11.676 } 00:14:11.676 ] 00:14:11.676 } 00:14:11.676 } 00:14:11.676 }' 00:14:11.676 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:11.676 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:11.676 BaseBdev2 00:14:11.676 BaseBdev3 
00:14:11.676 BaseBdev4' 00:14:11.676 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.676 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:11.676 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:11.676 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.676 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:11.676 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.676 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.676 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.955 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:11.955 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:11.955 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:11.955 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:11.955 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.955 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.955 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.955 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.955 14:40:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:11.955 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:11.955 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:11.955 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:11.955 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.955 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.955 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.955 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.955 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:11.955 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:11.955 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:11.955 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:11.955 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.955 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.955 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.955 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.955 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:11.955 14:40:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:11.955 14:40:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:11.955 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.955 14:40:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.955 [2024-11-04 14:40:10.998725] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:12.213 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.213 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:12.213 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:12.213 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:12.213 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:12.213 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:12.213 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:12.213 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:12.213 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.213 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.213 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.213 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:12.213 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.213 
14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.213 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.213 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.213 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.213 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.213 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.213 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.213 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.213 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.213 "name": "Existed_Raid", 00:14:12.213 "uuid": "faeb7f2b-3c62-4523-a1be-0ab9c1e6f7c3", 00:14:12.213 "strip_size_kb": 0, 00:14:12.213 "state": "online", 00:14:12.213 "raid_level": "raid1", 00:14:12.213 "superblock": false, 00:14:12.213 "num_base_bdevs": 4, 00:14:12.213 "num_base_bdevs_discovered": 3, 00:14:12.213 "num_base_bdevs_operational": 3, 00:14:12.213 "base_bdevs_list": [ 00:14:12.213 { 00:14:12.213 "name": null, 00:14:12.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.213 "is_configured": false, 00:14:12.213 "data_offset": 0, 00:14:12.214 "data_size": 65536 00:14:12.214 }, 00:14:12.214 { 00:14:12.214 "name": "BaseBdev2", 00:14:12.214 "uuid": "1f48f17b-b8d3-44f9-8eed-4073aab2f5f4", 00:14:12.214 "is_configured": true, 00:14:12.214 "data_offset": 0, 00:14:12.214 "data_size": 65536 00:14:12.214 }, 00:14:12.214 { 00:14:12.214 "name": "BaseBdev3", 00:14:12.214 "uuid": "20f35d3a-4013-4a65-b1c5-7241e28e017d", 00:14:12.214 "is_configured": true, 00:14:12.214 "data_offset": 0, 
00:14:12.214 "data_size": 65536 00:14:12.214 }, 00:14:12.214 { 00:14:12.214 "name": "BaseBdev4", 00:14:12.214 "uuid": "3386bc7a-937f-4504-8f03-1e1f1b1eafa8", 00:14:12.214 "is_configured": true, 00:14:12.214 "data_offset": 0, 00:14:12.214 "data_size": 65536 00:14:12.214 } 00:14:12.214 ] 00:14:12.214 }' 00:14:12.214 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.214 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.781 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:12.781 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:12.781 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.781 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.781 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.781 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:12.781 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.781 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:12.781 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:12.781 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:12.781 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.781 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.781 [2024-11-04 14:40:11.672799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:12.781 14:40:11 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.781 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:12.781 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:12.781 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.781 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.781 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:12.781 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.781 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.781 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:12.781 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:12.781 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:12.781 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.781 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.781 [2024-11-04 14:40:11.821890] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:13.039 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.039 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:13.039 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:13.039 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.039 14:40:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.039 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.039 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:13.039 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.039 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:13.039 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:13.039 14:40:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:13.039 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.039 14:40:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.039 [2024-11-04 14:40:11.974879] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:13.039 [2024-11-04 14:40:11.975032] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:13.039 [2024-11-04 14:40:12.056991] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:13.039 [2024-11-04 14:40:12.057251] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:13.039 [2024-11-04 14:40:12.057403] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:13.039 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.039 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:13.039 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:13.039 14:40:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.039 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.039 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.039 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:13.039 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.039 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:13.039 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:13.039 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:13.040 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:13.040 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:13.040 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:13.040 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.040 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.040 BaseBdev2 00:14:13.040 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.040 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:13.040 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:14:13.040 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:13.040 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:13.040 14:40:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:13.040 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:13.040 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:13.040 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.040 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.040 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.040 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:13.040 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.040 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.299 [ 00:14:13.299 { 00:14:13.299 "name": "BaseBdev2", 00:14:13.299 "aliases": [ 00:14:13.299 "6a8d1aa9-b462-42e5-8567-0ab57daa4ae7" 00:14:13.299 ], 00:14:13.299 "product_name": "Malloc disk", 00:14:13.299 "block_size": 512, 00:14:13.299 "num_blocks": 65536, 00:14:13.299 "uuid": "6a8d1aa9-b462-42e5-8567-0ab57daa4ae7", 00:14:13.299 "assigned_rate_limits": { 00:14:13.299 "rw_ios_per_sec": 0, 00:14:13.299 "rw_mbytes_per_sec": 0, 00:14:13.299 "r_mbytes_per_sec": 0, 00:14:13.299 "w_mbytes_per_sec": 0 00:14:13.299 }, 00:14:13.299 "claimed": false, 00:14:13.299 "zoned": false, 00:14:13.299 "supported_io_types": { 00:14:13.299 "read": true, 00:14:13.299 "write": true, 00:14:13.299 "unmap": true, 00:14:13.299 "flush": true, 00:14:13.299 "reset": true, 00:14:13.299 "nvme_admin": false, 00:14:13.299 "nvme_io": false, 00:14:13.299 "nvme_io_md": false, 00:14:13.299 "write_zeroes": true, 00:14:13.299 "zcopy": true, 00:14:13.299 "get_zone_info": false, 00:14:13.299 "zone_management": false, 00:14:13.299 "zone_append": false, 
00:14:13.299 "compare": false, 00:14:13.299 "compare_and_write": false, 00:14:13.299 "abort": true, 00:14:13.299 "seek_hole": false, 00:14:13.299 "seek_data": false, 00:14:13.299 "copy": true, 00:14:13.299 "nvme_iov_md": false 00:14:13.299 }, 00:14:13.299 "memory_domains": [ 00:14:13.299 { 00:14:13.299 "dma_device_id": "system", 00:14:13.299 "dma_device_type": 1 00:14:13.299 }, 00:14:13.299 { 00:14:13.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.299 "dma_device_type": 2 00:14:13.299 } 00:14:13.299 ], 00:14:13.299 "driver_specific": {} 00:14:13.299 } 00:14:13.299 ] 00:14:13.299 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.299 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:13.299 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:13.299 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:13.299 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:13.299 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.299 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.299 BaseBdev3 00:14:13.299 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.299 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:13.299 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:14:13.299 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:13.299 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:13.299 14:40:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:13.299 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:13.299 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:13.299 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.299 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.299 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.299 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:13.299 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.299 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.299 [ 00:14:13.299 { 00:14:13.299 "name": "BaseBdev3", 00:14:13.299 "aliases": [ 00:14:13.299 "f9499b3d-fd21-4d8d-b265-48c1274622b8" 00:14:13.299 ], 00:14:13.299 "product_name": "Malloc disk", 00:14:13.299 "block_size": 512, 00:14:13.299 "num_blocks": 65536, 00:14:13.299 "uuid": "f9499b3d-fd21-4d8d-b265-48c1274622b8", 00:14:13.299 "assigned_rate_limits": { 00:14:13.299 "rw_ios_per_sec": 0, 00:14:13.299 "rw_mbytes_per_sec": 0, 00:14:13.299 "r_mbytes_per_sec": 0, 00:14:13.299 "w_mbytes_per_sec": 0 00:14:13.299 }, 00:14:13.299 "claimed": false, 00:14:13.299 "zoned": false, 00:14:13.299 "supported_io_types": { 00:14:13.299 "read": true, 00:14:13.299 "write": true, 00:14:13.299 "unmap": true, 00:14:13.299 "flush": true, 00:14:13.299 "reset": true, 00:14:13.299 "nvme_admin": false, 00:14:13.299 "nvme_io": false, 00:14:13.299 "nvme_io_md": false, 00:14:13.299 "write_zeroes": true, 00:14:13.299 "zcopy": true, 00:14:13.299 "get_zone_info": false, 00:14:13.299 "zone_management": false, 00:14:13.299 "zone_append": false, 
00:14:13.299 "compare": false, 00:14:13.299 "compare_and_write": false, 00:14:13.299 "abort": true, 00:14:13.299 "seek_hole": false, 00:14:13.299 "seek_data": false, 00:14:13.299 "copy": true, 00:14:13.299 "nvme_iov_md": false 00:14:13.299 }, 00:14:13.299 "memory_domains": [ 00:14:13.299 { 00:14:13.299 "dma_device_id": "system", 00:14:13.299 "dma_device_type": 1 00:14:13.299 }, 00:14:13.299 { 00:14:13.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.299 "dma_device_type": 2 00:14:13.299 } 00:14:13.299 ], 00:14:13.299 "driver_specific": {} 00:14:13.299 } 00:14:13.299 ] 00:14:13.299 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.299 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:13.299 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:13.299 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:13.299 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:13.299 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.299 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.299 BaseBdev4 00:14:13.299 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.299 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:13.299 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:14:13.299 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:13.299 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.300 [ 00:14:13.300 { 00:14:13.300 "name": "BaseBdev4", 00:14:13.300 "aliases": [ 00:14:13.300 "f55bbfeb-de47-4c23-b6cd-8460d72e732c" 00:14:13.300 ], 00:14:13.300 "product_name": "Malloc disk", 00:14:13.300 "block_size": 512, 00:14:13.300 "num_blocks": 65536, 00:14:13.300 "uuid": "f55bbfeb-de47-4c23-b6cd-8460d72e732c", 00:14:13.300 "assigned_rate_limits": { 00:14:13.300 "rw_ios_per_sec": 0, 00:14:13.300 "rw_mbytes_per_sec": 0, 00:14:13.300 "r_mbytes_per_sec": 0, 00:14:13.300 "w_mbytes_per_sec": 0 00:14:13.300 }, 00:14:13.300 "claimed": false, 00:14:13.300 "zoned": false, 00:14:13.300 "supported_io_types": { 00:14:13.300 "read": true, 00:14:13.300 "write": true, 00:14:13.300 "unmap": true, 00:14:13.300 "flush": true, 00:14:13.300 "reset": true, 00:14:13.300 "nvme_admin": false, 00:14:13.300 "nvme_io": false, 00:14:13.300 "nvme_io_md": false, 00:14:13.300 "write_zeroes": true, 00:14:13.300 "zcopy": true, 00:14:13.300 "get_zone_info": false, 00:14:13.300 "zone_management": false, 00:14:13.300 "zone_append": false, 
00:14:13.300 "compare": false, 00:14:13.300 "compare_and_write": false, 00:14:13.300 "abort": true, 00:14:13.300 "seek_hole": false, 00:14:13.300 "seek_data": false, 00:14:13.300 "copy": true, 00:14:13.300 "nvme_iov_md": false 00:14:13.300 }, 00:14:13.300 "memory_domains": [ 00:14:13.300 { 00:14:13.300 "dma_device_id": "system", 00:14:13.300 "dma_device_type": 1 00:14:13.300 }, 00:14:13.300 { 00:14:13.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.300 "dma_device_type": 2 00:14:13.300 } 00:14:13.300 ], 00:14:13.300 "driver_specific": {} 00:14:13.300 } 00:14:13.300 ] 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.300 [2024-11-04 14:40:12.340414] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:13.300 [2024-11-04 14:40:12.340604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:13.300 [2024-11-04 14:40:12.340739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:13.300 [2024-11-04 14:40:12.343246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:13.300 [2024-11-04 14:40:12.343433] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:14:13.300 "name": "Existed_Raid", 00:14:13.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.300 "strip_size_kb": 0, 00:14:13.300 "state": "configuring", 00:14:13.300 "raid_level": "raid1", 00:14:13.300 "superblock": false, 00:14:13.300 "num_base_bdevs": 4, 00:14:13.300 "num_base_bdevs_discovered": 3, 00:14:13.300 "num_base_bdevs_operational": 4, 00:14:13.300 "base_bdevs_list": [ 00:14:13.300 { 00:14:13.300 "name": "BaseBdev1", 00:14:13.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.300 "is_configured": false, 00:14:13.300 "data_offset": 0, 00:14:13.300 "data_size": 0 00:14:13.300 }, 00:14:13.300 { 00:14:13.300 "name": "BaseBdev2", 00:14:13.300 "uuid": "6a8d1aa9-b462-42e5-8567-0ab57daa4ae7", 00:14:13.300 "is_configured": true, 00:14:13.300 "data_offset": 0, 00:14:13.300 "data_size": 65536 00:14:13.300 }, 00:14:13.300 { 00:14:13.300 "name": "BaseBdev3", 00:14:13.300 "uuid": "f9499b3d-fd21-4d8d-b265-48c1274622b8", 00:14:13.300 "is_configured": true, 00:14:13.300 "data_offset": 0, 00:14:13.300 "data_size": 65536 00:14:13.300 }, 00:14:13.300 { 00:14:13.300 "name": "BaseBdev4", 00:14:13.300 "uuid": "f55bbfeb-de47-4c23-b6cd-8460d72e732c", 00:14:13.300 "is_configured": true, 00:14:13.300 "data_offset": 0, 00:14:13.300 "data_size": 65536 00:14:13.300 } 00:14:13.300 ] 00:14:13.300 }' 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.300 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.867 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:13.867 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.868 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.868 [2024-11-04 14:40:12.840570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:14:13.868 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.868 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:13.868 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:13.868 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:13.868 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:13.868 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:13.868 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:13.868 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.868 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.868 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.868 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.868 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.868 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.868 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.868 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.868 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.868 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.868 "name": "Existed_Raid", 00:14:13.868 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:13.868 "strip_size_kb": 0, 00:14:13.868 "state": "configuring", 00:14:13.868 "raid_level": "raid1", 00:14:13.868 "superblock": false, 00:14:13.868 "num_base_bdevs": 4, 00:14:13.868 "num_base_bdevs_discovered": 2, 00:14:13.868 "num_base_bdevs_operational": 4, 00:14:13.868 "base_bdevs_list": [ 00:14:13.868 { 00:14:13.868 "name": "BaseBdev1", 00:14:13.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.868 "is_configured": false, 00:14:13.868 "data_offset": 0, 00:14:13.868 "data_size": 0 00:14:13.868 }, 00:14:13.868 { 00:14:13.868 "name": null, 00:14:13.868 "uuid": "6a8d1aa9-b462-42e5-8567-0ab57daa4ae7", 00:14:13.868 "is_configured": false, 00:14:13.868 "data_offset": 0, 00:14:13.868 "data_size": 65536 00:14:13.868 }, 00:14:13.868 { 00:14:13.868 "name": "BaseBdev3", 00:14:13.868 "uuid": "f9499b3d-fd21-4d8d-b265-48c1274622b8", 00:14:13.868 "is_configured": true, 00:14:13.868 "data_offset": 0, 00:14:13.868 "data_size": 65536 00:14:13.868 }, 00:14:13.868 { 00:14:13.868 "name": "BaseBdev4", 00:14:13.868 "uuid": "f55bbfeb-de47-4c23-b6cd-8460d72e732c", 00:14:13.868 "is_configured": true, 00:14:13.868 "data_offset": 0, 00:14:13.868 "data_size": 65536 00:14:13.868 } 00:14:13.868 ] 00:14:13.868 }' 00:14:13.868 14:40:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.868 14:40:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.434 14:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.434 14:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:14.434 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.434 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.434 14:40:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.434 14:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:14.434 14:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:14.434 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.434 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.434 [2024-11-04 14:40:13.490461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:14.434 BaseBdev1 00:14:14.434 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.434 14:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:14.434 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:14:14.434 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:14.434 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:14.434 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:14.434 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:14.434 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:14.434 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.434 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.435 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.435 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:14:14.435 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.435 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.435 [ 00:14:14.435 { 00:14:14.435 "name": "BaseBdev1", 00:14:14.435 "aliases": [ 00:14:14.435 "48bf6138-2661-4304-93de-ba2585e2a1a8" 00:14:14.435 ], 00:14:14.435 "product_name": "Malloc disk", 00:14:14.435 "block_size": 512, 00:14:14.435 "num_blocks": 65536, 00:14:14.435 "uuid": "48bf6138-2661-4304-93de-ba2585e2a1a8", 00:14:14.435 "assigned_rate_limits": { 00:14:14.435 "rw_ios_per_sec": 0, 00:14:14.435 "rw_mbytes_per_sec": 0, 00:14:14.435 "r_mbytes_per_sec": 0, 00:14:14.435 "w_mbytes_per_sec": 0 00:14:14.435 }, 00:14:14.435 "claimed": true, 00:14:14.435 "claim_type": "exclusive_write", 00:14:14.435 "zoned": false, 00:14:14.435 "supported_io_types": { 00:14:14.435 "read": true, 00:14:14.435 "write": true, 00:14:14.435 "unmap": true, 00:14:14.435 "flush": true, 00:14:14.435 "reset": true, 00:14:14.435 "nvme_admin": false, 00:14:14.435 "nvme_io": false, 00:14:14.435 "nvme_io_md": false, 00:14:14.435 "write_zeroes": true, 00:14:14.435 "zcopy": true, 00:14:14.435 "get_zone_info": false, 00:14:14.435 "zone_management": false, 00:14:14.435 "zone_append": false, 00:14:14.435 "compare": false, 00:14:14.435 "compare_and_write": false, 00:14:14.435 "abort": true, 00:14:14.435 "seek_hole": false, 00:14:14.435 "seek_data": false, 00:14:14.435 "copy": true, 00:14:14.435 "nvme_iov_md": false 00:14:14.435 }, 00:14:14.435 "memory_domains": [ 00:14:14.435 { 00:14:14.435 "dma_device_id": "system", 00:14:14.435 "dma_device_type": 1 00:14:14.435 }, 00:14:14.435 { 00:14:14.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.435 "dma_device_type": 2 00:14:14.435 } 00:14:14.435 ], 00:14:14.435 "driver_specific": {} 00:14:14.435 } 00:14:14.435 ] 00:14:14.435 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:14.435 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:14.435 14:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:14.435 14:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.435 14:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:14.435 14:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:14.435 14:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:14.435 14:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:14.435 14:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.435 14:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.435 14:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.435 14:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.435 14:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.435 14:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.435 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.435 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.435 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.694 14:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.694 "name": "Existed_Raid", 00:14:14.694 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:14.694 "strip_size_kb": 0, 00:14:14.694 "state": "configuring", 00:14:14.694 "raid_level": "raid1", 00:14:14.694 "superblock": false, 00:14:14.694 "num_base_bdevs": 4, 00:14:14.694 "num_base_bdevs_discovered": 3, 00:14:14.694 "num_base_bdevs_operational": 4, 00:14:14.694 "base_bdevs_list": [ 00:14:14.694 { 00:14:14.694 "name": "BaseBdev1", 00:14:14.694 "uuid": "48bf6138-2661-4304-93de-ba2585e2a1a8", 00:14:14.694 "is_configured": true, 00:14:14.694 "data_offset": 0, 00:14:14.694 "data_size": 65536 00:14:14.694 }, 00:14:14.694 { 00:14:14.694 "name": null, 00:14:14.694 "uuid": "6a8d1aa9-b462-42e5-8567-0ab57daa4ae7", 00:14:14.694 "is_configured": false, 00:14:14.694 "data_offset": 0, 00:14:14.694 "data_size": 65536 00:14:14.694 }, 00:14:14.694 { 00:14:14.694 "name": "BaseBdev3", 00:14:14.694 "uuid": "f9499b3d-fd21-4d8d-b265-48c1274622b8", 00:14:14.694 "is_configured": true, 00:14:14.694 "data_offset": 0, 00:14:14.694 "data_size": 65536 00:14:14.694 }, 00:14:14.694 { 00:14:14.694 "name": "BaseBdev4", 00:14:14.694 "uuid": "f55bbfeb-de47-4c23-b6cd-8460d72e732c", 00:14:14.694 "is_configured": true, 00:14:14.694 "data_offset": 0, 00:14:14.694 "data_size": 65536 00:14:14.694 } 00:14:14.694 ] 00:14:14.694 }' 00:14:14.694 14:40:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.694 14:40:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.952 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.952 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:14.952 14:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.952 14:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.952 14:40:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.210 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:15.210 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:15.210 14:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.210 14:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.210 [2024-11-04 14:40:14.106718] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:15.210 14:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.210 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:15.210 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:15.210 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.210 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:15.210 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:15.210 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:15.210 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.210 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.210 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.210 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.210 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:15.210 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.210 14:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.210 14:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.210 14:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.210 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.210 "name": "Existed_Raid", 00:14:15.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.210 "strip_size_kb": 0, 00:14:15.210 "state": "configuring", 00:14:15.210 "raid_level": "raid1", 00:14:15.210 "superblock": false, 00:14:15.210 "num_base_bdevs": 4, 00:14:15.210 "num_base_bdevs_discovered": 2, 00:14:15.210 "num_base_bdevs_operational": 4, 00:14:15.210 "base_bdevs_list": [ 00:14:15.210 { 00:14:15.210 "name": "BaseBdev1", 00:14:15.210 "uuid": "48bf6138-2661-4304-93de-ba2585e2a1a8", 00:14:15.210 "is_configured": true, 00:14:15.210 "data_offset": 0, 00:14:15.210 "data_size": 65536 00:14:15.210 }, 00:14:15.210 { 00:14:15.210 "name": null, 00:14:15.210 "uuid": "6a8d1aa9-b462-42e5-8567-0ab57daa4ae7", 00:14:15.210 "is_configured": false, 00:14:15.210 "data_offset": 0, 00:14:15.210 "data_size": 65536 00:14:15.210 }, 00:14:15.210 { 00:14:15.210 "name": null, 00:14:15.210 "uuid": "f9499b3d-fd21-4d8d-b265-48c1274622b8", 00:14:15.210 "is_configured": false, 00:14:15.210 "data_offset": 0, 00:14:15.210 "data_size": 65536 00:14:15.210 }, 00:14:15.210 { 00:14:15.210 "name": "BaseBdev4", 00:14:15.210 "uuid": "f55bbfeb-de47-4c23-b6cd-8460d72e732c", 00:14:15.210 "is_configured": true, 00:14:15.210 "data_offset": 0, 00:14:15.210 "data_size": 65536 00:14:15.210 } 00:14:15.210 ] 00:14:15.210 }' 00:14:15.210 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.210 14:40:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.777 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.777 14:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.777 14:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.777 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:15.777 14:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.777 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:15.777 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:15.777 14:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.777 14:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.777 [2024-11-04 14:40:14.686881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:15.777 14:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.777 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:15.777 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:15.777 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.777 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:15.777 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:15.777 14:40:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:15.777 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.778 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.778 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.778 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.778 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.778 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.778 14:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.778 14:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.778 14:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.778 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.778 "name": "Existed_Raid", 00:14:15.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.778 "strip_size_kb": 0, 00:14:15.778 "state": "configuring", 00:14:15.778 "raid_level": "raid1", 00:14:15.778 "superblock": false, 00:14:15.778 "num_base_bdevs": 4, 00:14:15.778 "num_base_bdevs_discovered": 3, 00:14:15.778 "num_base_bdevs_operational": 4, 00:14:15.778 "base_bdevs_list": [ 00:14:15.778 { 00:14:15.778 "name": "BaseBdev1", 00:14:15.778 "uuid": "48bf6138-2661-4304-93de-ba2585e2a1a8", 00:14:15.778 "is_configured": true, 00:14:15.778 "data_offset": 0, 00:14:15.778 "data_size": 65536 00:14:15.778 }, 00:14:15.778 { 00:14:15.778 "name": null, 00:14:15.778 "uuid": "6a8d1aa9-b462-42e5-8567-0ab57daa4ae7", 00:14:15.778 "is_configured": false, 00:14:15.778 "data_offset": 
0, 00:14:15.778 "data_size": 65536 00:14:15.778 }, 00:14:15.778 { 00:14:15.778 "name": "BaseBdev3", 00:14:15.778 "uuid": "f9499b3d-fd21-4d8d-b265-48c1274622b8", 00:14:15.778 "is_configured": true, 00:14:15.778 "data_offset": 0, 00:14:15.778 "data_size": 65536 00:14:15.778 }, 00:14:15.778 { 00:14:15.778 "name": "BaseBdev4", 00:14:15.778 "uuid": "f55bbfeb-de47-4c23-b6cd-8460d72e732c", 00:14:15.778 "is_configured": true, 00:14:15.778 "data_offset": 0, 00:14:15.778 "data_size": 65536 00:14:15.778 } 00:14:15.778 ] 00:14:15.778 }' 00:14:15.778 14:40:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.778 14:40:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.345 14:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.345 14:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:16.345 14:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.345 14:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.345 14:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.345 14:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:16.345 14:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:16.345 14:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.345 14:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.345 [2024-11-04 14:40:15.287106] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:16.345 14:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.345 14:40:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:16.345 14:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:16.345 14:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:16.346 14:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.346 14:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:16.346 14:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:16.346 14:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.346 14:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.346 14:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.346 14:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.346 14:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.346 14:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.346 14:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.346 14:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.346 14:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.346 14:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.346 "name": "Existed_Raid", 00:14:16.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.346 "strip_size_kb": 0, 00:14:16.346 "state": "configuring", 00:14:16.346 
"raid_level": "raid1", 00:14:16.346 "superblock": false, 00:14:16.346 "num_base_bdevs": 4, 00:14:16.346 "num_base_bdevs_discovered": 2, 00:14:16.346 "num_base_bdevs_operational": 4, 00:14:16.346 "base_bdevs_list": [ 00:14:16.346 { 00:14:16.346 "name": null, 00:14:16.346 "uuid": "48bf6138-2661-4304-93de-ba2585e2a1a8", 00:14:16.346 "is_configured": false, 00:14:16.346 "data_offset": 0, 00:14:16.346 "data_size": 65536 00:14:16.346 }, 00:14:16.346 { 00:14:16.346 "name": null, 00:14:16.346 "uuid": "6a8d1aa9-b462-42e5-8567-0ab57daa4ae7", 00:14:16.346 "is_configured": false, 00:14:16.346 "data_offset": 0, 00:14:16.346 "data_size": 65536 00:14:16.346 }, 00:14:16.346 { 00:14:16.346 "name": "BaseBdev3", 00:14:16.346 "uuid": "f9499b3d-fd21-4d8d-b265-48c1274622b8", 00:14:16.346 "is_configured": true, 00:14:16.346 "data_offset": 0, 00:14:16.346 "data_size": 65536 00:14:16.346 }, 00:14:16.346 { 00:14:16.346 "name": "BaseBdev4", 00:14:16.346 "uuid": "f55bbfeb-de47-4c23-b6cd-8460d72e732c", 00:14:16.346 "is_configured": true, 00:14:16.346 "data_offset": 0, 00:14:16.346 "data_size": 65536 00:14:16.346 } 00:14:16.346 ] 00:14:16.346 }' 00:14:16.346 14:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.346 14:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.912 14:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.912 14:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:16.912 14:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.912 14:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.912 14:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.912 14:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:14:16.913 14:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:16.913 14:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.913 14:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.913 [2024-11-04 14:40:15.963353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:16.913 14:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.913 14:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:16.913 14:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:16.913 14:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:16.913 14:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.913 14:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:16.913 14:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:16.913 14:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.913 14:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.913 14:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.913 14:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.913 14:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.913 14:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:16.913 14:40:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.913 14:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.913 14:40:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.913 14:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.913 "name": "Existed_Raid", 00:14:16.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.913 "strip_size_kb": 0, 00:14:16.913 "state": "configuring", 00:14:16.913 "raid_level": "raid1", 00:14:16.913 "superblock": false, 00:14:16.913 "num_base_bdevs": 4, 00:14:16.913 "num_base_bdevs_discovered": 3, 00:14:16.913 "num_base_bdevs_operational": 4, 00:14:16.913 "base_bdevs_list": [ 00:14:16.913 { 00:14:16.913 "name": null, 00:14:16.913 "uuid": "48bf6138-2661-4304-93de-ba2585e2a1a8", 00:14:16.913 "is_configured": false, 00:14:16.913 "data_offset": 0, 00:14:16.913 "data_size": 65536 00:14:16.913 }, 00:14:16.913 { 00:14:16.913 "name": "BaseBdev2", 00:14:16.913 "uuid": "6a8d1aa9-b462-42e5-8567-0ab57daa4ae7", 00:14:16.913 "is_configured": true, 00:14:16.913 "data_offset": 0, 00:14:16.913 "data_size": 65536 00:14:16.913 }, 00:14:16.913 { 00:14:16.913 "name": "BaseBdev3", 00:14:16.913 "uuid": "f9499b3d-fd21-4d8d-b265-48c1274622b8", 00:14:16.913 "is_configured": true, 00:14:16.913 "data_offset": 0, 00:14:16.913 "data_size": 65536 00:14:16.913 }, 00:14:16.913 { 00:14:16.913 "name": "BaseBdev4", 00:14:16.913 "uuid": "f55bbfeb-de47-4c23-b6cd-8460d72e732c", 00:14:16.913 "is_configured": true, 00:14:16.913 "data_offset": 0, 00:14:16.913 "data_size": 65536 00:14:16.913 } 00:14:16.913 ] 00:14:16.913 }' 00:14:16.913 14:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.913 14:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.481 14:40:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.481 14:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.481 14:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.481 14:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:17.481 14:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.481 14:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:17.481 14:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.481 14:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:17.481 14:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.481 14:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.481 14:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 48bf6138-2661-4304-93de-ba2585e2a1a8 00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.739 [2024-11-04 14:40:16.646630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:17.739 [2024-11-04 14:40:16.646993] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:17.739 [2024-11-04 14:40:16.647023] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:17.739 
[2024-11-04 14:40:16.647370] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:17.739 [2024-11-04 14:40:16.647619] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:17.739 [2024-11-04 14:40:16.647634] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:17.739 [2024-11-04 14:40:16.647996] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:17.739 NewBaseBdev 00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.739 [ 00:14:17.739 { 00:14:17.739 "name": "NewBaseBdev", 00:14:17.739 "aliases": [ 00:14:17.739 "48bf6138-2661-4304-93de-ba2585e2a1a8" 00:14:17.739 ], 00:14:17.739 "product_name": "Malloc disk", 00:14:17.739 "block_size": 512, 00:14:17.739 "num_blocks": 65536, 00:14:17.739 "uuid": "48bf6138-2661-4304-93de-ba2585e2a1a8", 00:14:17.739 "assigned_rate_limits": { 00:14:17.739 "rw_ios_per_sec": 0, 00:14:17.739 "rw_mbytes_per_sec": 0, 00:14:17.739 "r_mbytes_per_sec": 0, 00:14:17.739 "w_mbytes_per_sec": 0 00:14:17.739 }, 00:14:17.739 "claimed": true, 00:14:17.739 "claim_type": "exclusive_write", 00:14:17.739 "zoned": false, 00:14:17.739 "supported_io_types": { 00:14:17.739 "read": true, 00:14:17.739 "write": true, 00:14:17.739 "unmap": true, 00:14:17.739 "flush": true, 00:14:17.739 "reset": true, 00:14:17.739 "nvme_admin": false, 00:14:17.739 "nvme_io": false, 00:14:17.739 "nvme_io_md": false, 00:14:17.739 "write_zeroes": true, 00:14:17.739 "zcopy": true, 00:14:17.739 "get_zone_info": false, 00:14:17.739 "zone_management": false, 00:14:17.739 "zone_append": false, 00:14:17.739 "compare": false, 00:14:17.739 "compare_and_write": false, 00:14:17.739 "abort": true, 00:14:17.739 "seek_hole": false, 00:14:17.739 "seek_data": false, 00:14:17.739 "copy": true, 00:14:17.739 "nvme_iov_md": false 00:14:17.739 }, 00:14:17.739 "memory_domains": [ 00:14:17.739 { 00:14:17.739 "dma_device_id": "system", 00:14:17.739 "dma_device_type": 1 00:14:17.739 }, 00:14:17.739 { 00:14:17.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.739 "dma_device_type": 2 00:14:17.739 } 00:14:17.739 ], 00:14:17.739 "driver_specific": {} 00:14:17.739 } 00:14:17.739 ] 00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 
00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.739 14:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.739 "name": "Existed_Raid", 00:14:17.739 "uuid": "6f9c76d7-e90f-43c9-8e72-bdc22a287617", 00:14:17.739 "strip_size_kb": 0, 00:14:17.739 "state": "online", 00:14:17.739 
"raid_level": "raid1", 00:14:17.739 "superblock": false, 00:14:17.739 "num_base_bdevs": 4, 00:14:17.739 "num_base_bdevs_discovered": 4, 00:14:17.739 "num_base_bdevs_operational": 4, 00:14:17.739 "base_bdevs_list": [ 00:14:17.739 { 00:14:17.739 "name": "NewBaseBdev", 00:14:17.739 "uuid": "48bf6138-2661-4304-93de-ba2585e2a1a8", 00:14:17.739 "is_configured": true, 00:14:17.739 "data_offset": 0, 00:14:17.739 "data_size": 65536 00:14:17.739 }, 00:14:17.739 { 00:14:17.739 "name": "BaseBdev2", 00:14:17.739 "uuid": "6a8d1aa9-b462-42e5-8567-0ab57daa4ae7", 00:14:17.739 "is_configured": true, 00:14:17.739 "data_offset": 0, 00:14:17.739 "data_size": 65536 00:14:17.739 }, 00:14:17.739 { 00:14:17.739 "name": "BaseBdev3", 00:14:17.739 "uuid": "f9499b3d-fd21-4d8d-b265-48c1274622b8", 00:14:17.739 "is_configured": true, 00:14:17.739 "data_offset": 0, 00:14:17.739 "data_size": 65536 00:14:17.739 }, 00:14:17.739 { 00:14:17.739 "name": "BaseBdev4", 00:14:17.739 "uuid": "f55bbfeb-de47-4c23-b6cd-8460d72e732c", 00:14:17.739 "is_configured": true, 00:14:17.739 "data_offset": 0, 00:14:17.740 "data_size": 65536 00:14:17.740 } 00:14:17.740 ] 00:14:17.740 }' 00:14:17.740 14:40:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.740 14:40:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.305 14:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:18.305 14:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:18.305 14:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:18.305 14:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:18.305 14:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:18.305 14:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:14:18.305 14:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:18.305 14:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.305 14:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.305 14:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:18.305 [2024-11-04 14:40:17.223296] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:18.305 14:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.305 14:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:18.305 "name": "Existed_Raid", 00:14:18.305 "aliases": [ 00:14:18.305 "6f9c76d7-e90f-43c9-8e72-bdc22a287617" 00:14:18.305 ], 00:14:18.305 "product_name": "Raid Volume", 00:14:18.305 "block_size": 512, 00:14:18.305 "num_blocks": 65536, 00:14:18.305 "uuid": "6f9c76d7-e90f-43c9-8e72-bdc22a287617", 00:14:18.305 "assigned_rate_limits": { 00:14:18.305 "rw_ios_per_sec": 0, 00:14:18.305 "rw_mbytes_per_sec": 0, 00:14:18.305 "r_mbytes_per_sec": 0, 00:14:18.305 "w_mbytes_per_sec": 0 00:14:18.305 }, 00:14:18.305 "claimed": false, 00:14:18.305 "zoned": false, 00:14:18.305 "supported_io_types": { 00:14:18.305 "read": true, 00:14:18.305 "write": true, 00:14:18.305 "unmap": false, 00:14:18.305 "flush": false, 00:14:18.305 "reset": true, 00:14:18.305 "nvme_admin": false, 00:14:18.305 "nvme_io": false, 00:14:18.305 "nvme_io_md": false, 00:14:18.305 "write_zeroes": true, 00:14:18.305 "zcopy": false, 00:14:18.305 "get_zone_info": false, 00:14:18.305 "zone_management": false, 00:14:18.305 "zone_append": false, 00:14:18.305 "compare": false, 00:14:18.305 "compare_and_write": false, 00:14:18.305 "abort": false, 00:14:18.305 "seek_hole": false, 00:14:18.305 "seek_data": false, 00:14:18.305 
"copy": false, 00:14:18.305 "nvme_iov_md": false 00:14:18.305 }, 00:14:18.305 "memory_domains": [ 00:14:18.305 { 00:14:18.305 "dma_device_id": "system", 00:14:18.305 "dma_device_type": 1 00:14:18.305 }, 00:14:18.305 { 00:14:18.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.305 "dma_device_type": 2 00:14:18.305 }, 00:14:18.305 { 00:14:18.305 "dma_device_id": "system", 00:14:18.305 "dma_device_type": 1 00:14:18.305 }, 00:14:18.305 { 00:14:18.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.305 "dma_device_type": 2 00:14:18.305 }, 00:14:18.305 { 00:14:18.305 "dma_device_id": "system", 00:14:18.305 "dma_device_type": 1 00:14:18.305 }, 00:14:18.305 { 00:14:18.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.305 "dma_device_type": 2 00:14:18.305 }, 00:14:18.305 { 00:14:18.305 "dma_device_id": "system", 00:14:18.305 "dma_device_type": 1 00:14:18.305 }, 00:14:18.305 { 00:14:18.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.305 "dma_device_type": 2 00:14:18.305 } 00:14:18.305 ], 00:14:18.306 "driver_specific": { 00:14:18.306 "raid": { 00:14:18.306 "uuid": "6f9c76d7-e90f-43c9-8e72-bdc22a287617", 00:14:18.306 "strip_size_kb": 0, 00:14:18.306 "state": "online", 00:14:18.306 "raid_level": "raid1", 00:14:18.306 "superblock": false, 00:14:18.306 "num_base_bdevs": 4, 00:14:18.306 "num_base_bdevs_discovered": 4, 00:14:18.306 "num_base_bdevs_operational": 4, 00:14:18.306 "base_bdevs_list": [ 00:14:18.306 { 00:14:18.306 "name": "NewBaseBdev", 00:14:18.306 "uuid": "48bf6138-2661-4304-93de-ba2585e2a1a8", 00:14:18.306 "is_configured": true, 00:14:18.306 "data_offset": 0, 00:14:18.306 "data_size": 65536 00:14:18.306 }, 00:14:18.306 { 00:14:18.306 "name": "BaseBdev2", 00:14:18.306 "uuid": "6a8d1aa9-b462-42e5-8567-0ab57daa4ae7", 00:14:18.306 "is_configured": true, 00:14:18.306 "data_offset": 0, 00:14:18.306 "data_size": 65536 00:14:18.306 }, 00:14:18.306 { 00:14:18.306 "name": "BaseBdev3", 00:14:18.306 "uuid": "f9499b3d-fd21-4d8d-b265-48c1274622b8", 00:14:18.306 
"is_configured": true, 00:14:18.306 "data_offset": 0, 00:14:18.306 "data_size": 65536 00:14:18.306 }, 00:14:18.306 { 00:14:18.306 "name": "BaseBdev4", 00:14:18.306 "uuid": "f55bbfeb-de47-4c23-b6cd-8460d72e732c", 00:14:18.306 "is_configured": true, 00:14:18.306 "data_offset": 0, 00:14:18.306 "data_size": 65536 00:14:18.306 } 00:14:18.306 ] 00:14:18.306 } 00:14:18.306 } 00:14:18.306 }' 00:14:18.306 14:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:18.306 14:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:18.306 BaseBdev2 00:14:18.306 BaseBdev3 00:14:18.306 BaseBdev4' 00:14:18.306 14:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:18.306 14:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:18.306 14:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:18.306 14:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:18.306 14:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:18.306 14:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.306 14:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.306 14:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:18.564 14:40:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:18.564 14:40:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.564 [2024-11-04 14:40:17.615014] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:18.564 [2024-11-04 14:40:17.615053] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:18.564 [2024-11-04 14:40:17.615160] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:18.564 [2024-11-04 14:40:17.615521] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:18.564 [2024-11-04 14:40:17.615543] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73312 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 73312 ']' 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 73312 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73312 00:14:18.564 killing process with pid 73312 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73312' 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 73312 00:14:18.564 [2024-11-04 14:40:17.652187] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:18.564 14:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 73312 00:14:19.136 [2024-11-04 14:40:18.007479] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:20.071 14:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:20.071 00:14:20.071 real 0m13.067s 00:14:20.071 user 0m21.694s 00:14:20.071 sys 0m1.816s 00:14:20.071 14:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:20.071 ************************************ 00:14:20.071 END TEST raid_state_function_test 00:14:20.071 ************************************ 00:14:20.071 14:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:20.071 14:40:19 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:14:20.071 14:40:19 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:20.071 14:40:19 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:20.071 14:40:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:20.071 ************************************ 00:14:20.071 START TEST raid_state_function_test_sb 00:14:20.071 ************************************ 00:14:20.071 14:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 true 00:14:20.071 14:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:20.071 14:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:20.071 14:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:20.071 14:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:20.071 14:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:20.071 14:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:20.071 14:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:20.071 14:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:20.071 14:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:20.071 14:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:20.071 14:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:20.071 14:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:20.071 
14:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:20.071 14:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:20.071 14:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:20.071 14:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:20.071 14:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:20.071 14:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:20.071 14:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:20.071 14:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:20.071 14:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:20.071 14:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:20.071 14:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:20.071 14:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:20.071 14:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:20.071 14:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:20.071 14:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:20.071 14:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:20.071 14:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73999 00:14:20.071 14:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:20.071 14:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73999' 00:14:20.071 Process raid pid: 73999 00:14:20.072 14:40:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73999 00:14:20.072 14:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 73999 ']' 00:14:20.072 14:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.072 14:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:20.072 14:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.072 14:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:20.072 14:40:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.377 [2024-11-04 14:40:19.213713] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:14:20.377 [2024-11-04 14:40:19.214171] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.377 [2024-11-04 14:40:19.405698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.635 [2024-11-04 14:40:19.537650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.635 [2024-11-04 14:40:19.747134] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:20.635 [2024-11-04 14:40:19.747188] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:21.202 14:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:21.202 14:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:14:21.202 14:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:21.202 14:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.202 14:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.202 [2024-11-04 14:40:20.258434] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:21.202 [2024-11-04 14:40:20.258528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:21.202 [2024-11-04 14:40:20.258556] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:21.202 [2024-11-04 14:40:20.258587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:21.202 [2024-11-04 14:40:20.258597] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:14:21.202 [2024-11-04 14:40:20.258609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:21.202 [2024-11-04 14:40:20.258618] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:21.202 [2024-11-04 14:40:20.258631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:21.202 14:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.202 14:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:21.202 14:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.202 14:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:21.202 14:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:21.202 14:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:21.202 14:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:21.202 14:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.202 14:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.202 14:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.202 14:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.202 14:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.202 14:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.202 14:40:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.202 14:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.202 14:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.202 14:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.202 "name": "Existed_Raid", 00:14:21.202 "uuid": "e4c05ab3-d879-4e9a-b28a-7e3d983131d5", 00:14:21.202 "strip_size_kb": 0, 00:14:21.202 "state": "configuring", 00:14:21.202 "raid_level": "raid1", 00:14:21.202 "superblock": true, 00:14:21.202 "num_base_bdevs": 4, 00:14:21.202 "num_base_bdevs_discovered": 0, 00:14:21.202 "num_base_bdevs_operational": 4, 00:14:21.202 "base_bdevs_list": [ 00:14:21.202 { 00:14:21.202 "name": "BaseBdev1", 00:14:21.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.202 "is_configured": false, 00:14:21.202 "data_offset": 0, 00:14:21.202 "data_size": 0 00:14:21.202 }, 00:14:21.202 { 00:14:21.202 "name": "BaseBdev2", 00:14:21.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.202 "is_configured": false, 00:14:21.202 "data_offset": 0, 00:14:21.202 "data_size": 0 00:14:21.202 }, 00:14:21.202 { 00:14:21.202 "name": "BaseBdev3", 00:14:21.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.202 "is_configured": false, 00:14:21.202 "data_offset": 0, 00:14:21.202 "data_size": 0 00:14:21.202 }, 00:14:21.202 { 00:14:21.202 "name": "BaseBdev4", 00:14:21.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.202 "is_configured": false, 00:14:21.202 "data_offset": 0, 00:14:21.202 "data_size": 0 00:14:21.202 } 00:14:21.202 ] 00:14:21.202 }' 00:14:21.202 14:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.202 14:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.770 14:40:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:21.770 14:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.770 14:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.771 [2024-11-04 14:40:20.774515] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:21.771 [2024-11-04 14:40:20.774757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.771 [2024-11-04 14:40:20.782504] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:21.771 [2024-11-04 14:40:20.782584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:21.771 [2024-11-04 14:40:20.782598] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:21.771 [2024-11-04 14:40:20.782612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:21.771 [2024-11-04 14:40:20.782620] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:21.771 [2024-11-04 14:40:20.782632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:21.771 [2024-11-04 14:40:20.782640] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:14:21.771 [2024-11-04 14:40:20.782651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.771 [2024-11-04 14:40:20.828013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:21.771 BaseBdev1 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.771 [ 00:14:21.771 { 00:14:21.771 "name": "BaseBdev1", 00:14:21.771 "aliases": [ 00:14:21.771 "c16e751d-aed1-4802-928a-69f04b3876c2" 00:14:21.771 ], 00:14:21.771 "product_name": "Malloc disk", 00:14:21.771 "block_size": 512, 00:14:21.771 "num_blocks": 65536, 00:14:21.771 "uuid": "c16e751d-aed1-4802-928a-69f04b3876c2", 00:14:21.771 "assigned_rate_limits": { 00:14:21.771 "rw_ios_per_sec": 0, 00:14:21.771 "rw_mbytes_per_sec": 0, 00:14:21.771 "r_mbytes_per_sec": 0, 00:14:21.771 "w_mbytes_per_sec": 0 00:14:21.771 }, 00:14:21.771 "claimed": true, 00:14:21.771 "claim_type": "exclusive_write", 00:14:21.771 "zoned": false, 00:14:21.771 "supported_io_types": { 00:14:21.771 "read": true, 00:14:21.771 "write": true, 00:14:21.771 "unmap": true, 00:14:21.771 "flush": true, 00:14:21.771 "reset": true, 00:14:21.771 "nvme_admin": false, 00:14:21.771 "nvme_io": false, 00:14:21.771 "nvme_io_md": false, 00:14:21.771 "write_zeroes": true, 00:14:21.771 "zcopy": true, 00:14:21.771 "get_zone_info": false, 00:14:21.771 "zone_management": false, 00:14:21.771 "zone_append": false, 00:14:21.771 "compare": false, 00:14:21.771 "compare_and_write": false, 00:14:21.771 "abort": true, 00:14:21.771 "seek_hole": false, 00:14:21.771 "seek_data": false, 00:14:21.771 "copy": true, 00:14:21.771 "nvme_iov_md": false 00:14:21.771 }, 00:14:21.771 "memory_domains": [ 00:14:21.771 { 00:14:21.771 "dma_device_id": "system", 00:14:21.771 "dma_device_type": 1 00:14:21.771 }, 00:14:21.771 { 00:14:21.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.771 "dma_device_type": 2 00:14:21.771 } 00:14:21.771 ], 00:14:21.771 "driver_specific": {} 
00:14:21.771 } 00:14:21.771 ] 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.771 14:40:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.030 14:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.030 "name": "Existed_Raid", 00:14:22.030 "uuid": "7cf1de7d-4be2-49e2-9f68-68dc9c58b650", 00:14:22.030 "strip_size_kb": 0, 00:14:22.030 "state": "configuring", 00:14:22.030 "raid_level": "raid1", 00:14:22.030 "superblock": true, 00:14:22.030 "num_base_bdevs": 4, 00:14:22.030 "num_base_bdevs_discovered": 1, 00:14:22.030 "num_base_bdevs_operational": 4, 00:14:22.030 "base_bdevs_list": [ 00:14:22.030 { 00:14:22.030 "name": "BaseBdev1", 00:14:22.030 "uuid": "c16e751d-aed1-4802-928a-69f04b3876c2", 00:14:22.030 "is_configured": true, 00:14:22.030 "data_offset": 2048, 00:14:22.030 "data_size": 63488 00:14:22.030 }, 00:14:22.030 { 00:14:22.030 "name": "BaseBdev2", 00:14:22.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.030 "is_configured": false, 00:14:22.030 "data_offset": 0, 00:14:22.030 "data_size": 0 00:14:22.030 }, 00:14:22.030 { 00:14:22.030 "name": "BaseBdev3", 00:14:22.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.030 "is_configured": false, 00:14:22.030 "data_offset": 0, 00:14:22.030 "data_size": 0 00:14:22.030 }, 00:14:22.030 { 00:14:22.030 "name": "BaseBdev4", 00:14:22.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.030 "is_configured": false, 00:14:22.030 "data_offset": 0, 00:14:22.030 "data_size": 0 00:14:22.030 } 00:14:22.030 ] 00:14:22.030 }' 00:14:22.030 14:40:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.030 14:40:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.289 14:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:22.289 14:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.289 14:40:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:22.289 [2024-11-04 14:40:21.396268] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:22.289 [2024-11-04 14:40:21.396331] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:22.289 14:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.289 14:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:22.289 14:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.289 14:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.289 [2024-11-04 14:40:21.404340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:22.289 [2024-11-04 14:40:21.406816] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:22.289 [2024-11-04 14:40:21.406872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:22.289 [2024-11-04 14:40:21.406889] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:22.289 [2024-11-04 14:40:21.406906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:22.289 [2024-11-04 14:40:21.406915] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:22.289 [2024-11-04 14:40:21.406948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:22.548 14:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.548 14:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:22.548 14:40:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:22.548 14:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:22.548 14:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.548 14:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.548 14:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:22.548 14:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.548 14:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:22.548 14:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.548 14:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.548 14:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.548 14:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.548 14:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.548 14:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.548 14:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.548 14:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.548 14:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.548 14:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.548 "name": 
"Existed_Raid", 00:14:22.548 "uuid": "209a7a6f-a806-4e6b-b933-c60960e7753a", 00:14:22.548 "strip_size_kb": 0, 00:14:22.548 "state": "configuring", 00:14:22.548 "raid_level": "raid1", 00:14:22.548 "superblock": true, 00:14:22.548 "num_base_bdevs": 4, 00:14:22.548 "num_base_bdevs_discovered": 1, 00:14:22.548 "num_base_bdevs_operational": 4, 00:14:22.548 "base_bdevs_list": [ 00:14:22.548 { 00:14:22.548 "name": "BaseBdev1", 00:14:22.548 "uuid": "c16e751d-aed1-4802-928a-69f04b3876c2", 00:14:22.548 "is_configured": true, 00:14:22.548 "data_offset": 2048, 00:14:22.548 "data_size": 63488 00:14:22.548 }, 00:14:22.548 { 00:14:22.548 "name": "BaseBdev2", 00:14:22.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.548 "is_configured": false, 00:14:22.548 "data_offset": 0, 00:14:22.548 "data_size": 0 00:14:22.548 }, 00:14:22.548 { 00:14:22.548 "name": "BaseBdev3", 00:14:22.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.548 "is_configured": false, 00:14:22.548 "data_offset": 0, 00:14:22.548 "data_size": 0 00:14:22.548 }, 00:14:22.548 { 00:14:22.548 "name": "BaseBdev4", 00:14:22.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.548 "is_configured": false, 00:14:22.548 "data_offset": 0, 00:14:22.548 "data_size": 0 00:14:22.548 } 00:14:22.548 ] 00:14:22.548 }' 00:14:22.548 14:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.548 14:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.807 14:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:22.807 14:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.807 14:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.066 [2024-11-04 14:40:21.964632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:23.066 
BaseBdev2 00:14:23.066 14:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.066 14:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:23.066 14:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:14:23.066 14:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:23.066 14:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:23.066 14:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:23.066 14:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:23.066 14:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:23.066 14:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.066 14:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.066 14:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.066 14:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:23.066 14:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.066 14:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.066 [ 00:14:23.066 { 00:14:23.066 "name": "BaseBdev2", 00:14:23.066 "aliases": [ 00:14:23.066 "6b0bc4d5-affc-4d5c-968b-105ff3c0ba0e" 00:14:23.066 ], 00:14:23.066 "product_name": "Malloc disk", 00:14:23.066 "block_size": 512, 00:14:23.066 "num_blocks": 65536, 00:14:23.066 "uuid": "6b0bc4d5-affc-4d5c-968b-105ff3c0ba0e", 00:14:23.066 "assigned_rate_limits": { 
00:14:23.066 "rw_ios_per_sec": 0, 00:14:23.066 "rw_mbytes_per_sec": 0, 00:14:23.066 "r_mbytes_per_sec": 0, 00:14:23.066 "w_mbytes_per_sec": 0 00:14:23.066 }, 00:14:23.066 "claimed": true, 00:14:23.066 "claim_type": "exclusive_write", 00:14:23.066 "zoned": false, 00:14:23.066 "supported_io_types": { 00:14:23.066 "read": true, 00:14:23.066 "write": true, 00:14:23.066 "unmap": true, 00:14:23.066 "flush": true, 00:14:23.066 "reset": true, 00:14:23.066 "nvme_admin": false, 00:14:23.066 "nvme_io": false, 00:14:23.066 "nvme_io_md": false, 00:14:23.066 "write_zeroes": true, 00:14:23.066 "zcopy": true, 00:14:23.066 "get_zone_info": false, 00:14:23.066 "zone_management": false, 00:14:23.066 "zone_append": false, 00:14:23.066 "compare": false, 00:14:23.066 "compare_and_write": false, 00:14:23.066 "abort": true, 00:14:23.066 "seek_hole": false, 00:14:23.066 "seek_data": false, 00:14:23.066 "copy": true, 00:14:23.066 "nvme_iov_md": false 00:14:23.066 }, 00:14:23.066 "memory_domains": [ 00:14:23.066 { 00:14:23.066 "dma_device_id": "system", 00:14:23.066 "dma_device_type": 1 00:14:23.066 }, 00:14:23.066 { 00:14:23.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.066 "dma_device_type": 2 00:14:23.066 } 00:14:23.066 ], 00:14:23.066 "driver_specific": {} 00:14:23.066 } 00:14:23.066 ] 00:14:23.066 14:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.066 14:40:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:23.066 14:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:23.066 14:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:23.066 14:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:23.066 14:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:14:23.066 14:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.066 14:40:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:23.066 14:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:23.066 14:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:23.066 14:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.066 14:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.066 14:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.066 14:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.066 14:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.066 14:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.066 14:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.066 14:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.066 14:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.066 14:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.066 "name": "Existed_Raid", 00:14:23.066 "uuid": "209a7a6f-a806-4e6b-b933-c60960e7753a", 00:14:23.066 "strip_size_kb": 0, 00:14:23.066 "state": "configuring", 00:14:23.066 "raid_level": "raid1", 00:14:23.066 "superblock": true, 00:14:23.066 "num_base_bdevs": 4, 00:14:23.066 "num_base_bdevs_discovered": 2, 00:14:23.066 "num_base_bdevs_operational": 4, 00:14:23.066 
"base_bdevs_list": [ 00:14:23.066 { 00:14:23.066 "name": "BaseBdev1", 00:14:23.066 "uuid": "c16e751d-aed1-4802-928a-69f04b3876c2", 00:14:23.066 "is_configured": true, 00:14:23.066 "data_offset": 2048, 00:14:23.066 "data_size": 63488 00:14:23.066 }, 00:14:23.066 { 00:14:23.066 "name": "BaseBdev2", 00:14:23.066 "uuid": "6b0bc4d5-affc-4d5c-968b-105ff3c0ba0e", 00:14:23.066 "is_configured": true, 00:14:23.066 "data_offset": 2048, 00:14:23.066 "data_size": 63488 00:14:23.066 }, 00:14:23.066 { 00:14:23.066 "name": "BaseBdev3", 00:14:23.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.066 "is_configured": false, 00:14:23.066 "data_offset": 0, 00:14:23.066 "data_size": 0 00:14:23.066 }, 00:14:23.066 { 00:14:23.066 "name": "BaseBdev4", 00:14:23.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.066 "is_configured": false, 00:14:23.066 "data_offset": 0, 00:14:23.066 "data_size": 0 00:14:23.066 } 00:14:23.066 ] 00:14:23.066 }' 00:14:23.066 14:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.066 14:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.635 14:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:23.635 14:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.635 14:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.635 [2024-11-04 14:40:22.599035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:23.635 BaseBdev3 00:14:23.635 14:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.635 14:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:23.635 14:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local 
bdev_name=BaseBdev3 00:14:23.635 14:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:23.635 14:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:23.635 14:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:23.635 14:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:23.635 14:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:23.635 14:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.635 14:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.635 14:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.635 14:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:23.635 14:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.635 14:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.636 [ 00:14:23.636 { 00:14:23.636 "name": "BaseBdev3", 00:14:23.636 "aliases": [ 00:14:23.636 "ce77d2a2-6e9d-4843-8c0c-5839e36a77b6" 00:14:23.636 ], 00:14:23.636 "product_name": "Malloc disk", 00:14:23.636 "block_size": 512, 00:14:23.636 "num_blocks": 65536, 00:14:23.636 "uuid": "ce77d2a2-6e9d-4843-8c0c-5839e36a77b6", 00:14:23.636 "assigned_rate_limits": { 00:14:23.636 "rw_ios_per_sec": 0, 00:14:23.636 "rw_mbytes_per_sec": 0, 00:14:23.636 "r_mbytes_per_sec": 0, 00:14:23.636 "w_mbytes_per_sec": 0 00:14:23.636 }, 00:14:23.636 "claimed": true, 00:14:23.636 "claim_type": "exclusive_write", 00:14:23.636 "zoned": false, 00:14:23.636 "supported_io_types": { 00:14:23.636 "read": true, 00:14:23.636 
"write": true, 00:14:23.636 "unmap": true, 00:14:23.636 "flush": true, 00:14:23.636 "reset": true, 00:14:23.636 "nvme_admin": false, 00:14:23.636 "nvme_io": false, 00:14:23.636 "nvme_io_md": false, 00:14:23.636 "write_zeroes": true, 00:14:23.636 "zcopy": true, 00:14:23.636 "get_zone_info": false, 00:14:23.636 "zone_management": false, 00:14:23.636 "zone_append": false, 00:14:23.636 "compare": false, 00:14:23.636 "compare_and_write": false, 00:14:23.636 "abort": true, 00:14:23.636 "seek_hole": false, 00:14:23.636 "seek_data": false, 00:14:23.636 "copy": true, 00:14:23.636 "nvme_iov_md": false 00:14:23.636 }, 00:14:23.636 "memory_domains": [ 00:14:23.636 { 00:14:23.636 "dma_device_id": "system", 00:14:23.636 "dma_device_type": 1 00:14:23.636 }, 00:14:23.636 { 00:14:23.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.636 "dma_device_type": 2 00:14:23.636 } 00:14:23.636 ], 00:14:23.636 "driver_specific": {} 00:14:23.636 } 00:14:23.636 ] 00:14:23.636 14:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.636 14:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:23.636 14:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:23.636 14:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:23.636 14:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:23.636 14:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.636 14:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.636 14:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:23.636 14:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:14:23.636 14:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:23.636 14:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.636 14:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.636 14:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.636 14:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.636 14:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.636 14:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.636 14:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.636 14:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.636 14:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.636 14:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.636 "name": "Existed_Raid", 00:14:23.636 "uuid": "209a7a6f-a806-4e6b-b933-c60960e7753a", 00:14:23.636 "strip_size_kb": 0, 00:14:23.636 "state": "configuring", 00:14:23.636 "raid_level": "raid1", 00:14:23.636 "superblock": true, 00:14:23.636 "num_base_bdevs": 4, 00:14:23.636 "num_base_bdevs_discovered": 3, 00:14:23.636 "num_base_bdevs_operational": 4, 00:14:23.636 "base_bdevs_list": [ 00:14:23.636 { 00:14:23.636 "name": "BaseBdev1", 00:14:23.636 "uuid": "c16e751d-aed1-4802-928a-69f04b3876c2", 00:14:23.636 "is_configured": true, 00:14:23.636 "data_offset": 2048, 00:14:23.636 "data_size": 63488 00:14:23.636 }, 00:14:23.636 { 00:14:23.636 "name": "BaseBdev2", 00:14:23.636 "uuid": 
"6b0bc4d5-affc-4d5c-968b-105ff3c0ba0e", 00:14:23.636 "is_configured": true, 00:14:23.636 "data_offset": 2048, 00:14:23.636 "data_size": 63488 00:14:23.636 }, 00:14:23.636 { 00:14:23.636 "name": "BaseBdev3", 00:14:23.636 "uuid": "ce77d2a2-6e9d-4843-8c0c-5839e36a77b6", 00:14:23.636 "is_configured": true, 00:14:23.636 "data_offset": 2048, 00:14:23.636 "data_size": 63488 00:14:23.636 }, 00:14:23.636 { 00:14:23.636 "name": "BaseBdev4", 00:14:23.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.636 "is_configured": false, 00:14:23.636 "data_offset": 0, 00:14:23.636 "data_size": 0 00:14:23.636 } 00:14:23.636 ] 00:14:23.636 }' 00:14:23.636 14:40:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.636 14:40:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.203 14:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:24.203 14:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.203 14:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.203 [2024-11-04 14:40:23.251283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:24.203 [2024-11-04 14:40:23.251644] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:24.203 [2024-11-04 14:40:23.251664] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:24.203 BaseBdev4 00:14:24.203 [2024-11-04 14:40:23.252042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:24.203 [2024-11-04 14:40:23.252249] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:24.203 [2024-11-04 14:40:23.252272] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:14:24.203 [2024-11-04 14:40:23.252450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.203 14:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.204 14:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:24.204 14:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:14:24.204 14:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:24.204 14:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:24.204 14:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:24.204 14:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:24.204 14:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:24.204 14:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.204 14:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.204 14:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.204 14:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:24.204 14:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.204 14:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.204 [ 00:14:24.204 { 00:14:24.204 "name": "BaseBdev4", 00:14:24.204 "aliases": [ 00:14:24.204 "674f7915-f452-4b54-b585-4203222b96cc" 00:14:24.204 ], 00:14:24.204 "product_name": "Malloc disk", 00:14:24.204 "block_size": 512, 00:14:24.204 
"num_blocks": 65536, 00:14:24.204 "uuid": "674f7915-f452-4b54-b585-4203222b96cc", 00:14:24.204 "assigned_rate_limits": { 00:14:24.204 "rw_ios_per_sec": 0, 00:14:24.204 "rw_mbytes_per_sec": 0, 00:14:24.204 "r_mbytes_per_sec": 0, 00:14:24.204 "w_mbytes_per_sec": 0 00:14:24.204 }, 00:14:24.204 "claimed": true, 00:14:24.204 "claim_type": "exclusive_write", 00:14:24.204 "zoned": false, 00:14:24.204 "supported_io_types": { 00:14:24.204 "read": true, 00:14:24.204 "write": true, 00:14:24.204 "unmap": true, 00:14:24.204 "flush": true, 00:14:24.204 "reset": true, 00:14:24.204 "nvme_admin": false, 00:14:24.204 "nvme_io": false, 00:14:24.204 "nvme_io_md": false, 00:14:24.204 "write_zeroes": true, 00:14:24.204 "zcopy": true, 00:14:24.204 "get_zone_info": false, 00:14:24.204 "zone_management": false, 00:14:24.204 "zone_append": false, 00:14:24.204 "compare": false, 00:14:24.204 "compare_and_write": false, 00:14:24.204 "abort": true, 00:14:24.204 "seek_hole": false, 00:14:24.204 "seek_data": false, 00:14:24.204 "copy": true, 00:14:24.204 "nvme_iov_md": false 00:14:24.204 }, 00:14:24.204 "memory_domains": [ 00:14:24.204 { 00:14:24.204 "dma_device_id": "system", 00:14:24.204 "dma_device_type": 1 00:14:24.204 }, 00:14:24.204 { 00:14:24.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.204 "dma_device_type": 2 00:14:24.204 } 00:14:24.204 ], 00:14:24.204 "driver_specific": {} 00:14:24.204 } 00:14:24.204 ] 00:14:24.204 14:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.204 14:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:24.204 14:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:24.204 14:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:24.204 14:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:14:24.204 14:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.204 14:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.204 14:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.204 14:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.204 14:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:24.204 14:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.204 14:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.204 14:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.204 14:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.204 14:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.204 14:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.204 14:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.204 14:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.204 14:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.464 14:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.464 "name": "Existed_Raid", 00:14:24.464 "uuid": "209a7a6f-a806-4e6b-b933-c60960e7753a", 00:14:24.464 "strip_size_kb": 0, 00:14:24.464 "state": "online", 00:14:24.464 "raid_level": "raid1", 00:14:24.464 "superblock": true, 00:14:24.464 "num_base_bdevs": 4, 
00:14:24.464 "num_base_bdevs_discovered": 4, 00:14:24.464 "num_base_bdevs_operational": 4, 00:14:24.464 "base_bdevs_list": [ 00:14:24.464 { 00:14:24.464 "name": "BaseBdev1", 00:14:24.464 "uuid": "c16e751d-aed1-4802-928a-69f04b3876c2", 00:14:24.464 "is_configured": true, 00:14:24.464 "data_offset": 2048, 00:14:24.464 "data_size": 63488 00:14:24.464 }, 00:14:24.464 { 00:14:24.464 "name": "BaseBdev2", 00:14:24.464 "uuid": "6b0bc4d5-affc-4d5c-968b-105ff3c0ba0e", 00:14:24.464 "is_configured": true, 00:14:24.464 "data_offset": 2048, 00:14:24.464 "data_size": 63488 00:14:24.464 }, 00:14:24.464 { 00:14:24.464 "name": "BaseBdev3", 00:14:24.464 "uuid": "ce77d2a2-6e9d-4843-8c0c-5839e36a77b6", 00:14:24.464 "is_configured": true, 00:14:24.464 "data_offset": 2048, 00:14:24.464 "data_size": 63488 00:14:24.464 }, 00:14:24.464 { 00:14:24.464 "name": "BaseBdev4", 00:14:24.464 "uuid": "674f7915-f452-4b54-b585-4203222b96cc", 00:14:24.464 "is_configured": true, 00:14:24.464 "data_offset": 2048, 00:14:24.464 "data_size": 63488 00:14:24.464 } 00:14:24.464 ] 00:14:24.464 }' 00:14:24.464 14:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.464 14:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.723 14:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:24.723 14:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:24.723 14:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:24.723 14:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:24.723 14:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:24.723 14:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:24.723 
14:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:24.723 14:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.723 14:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:24.723 14:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.982 [2024-11-04 14:40:23.847959] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:24.982 14:40:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.982 14:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:24.982 "name": "Existed_Raid", 00:14:24.982 "aliases": [ 00:14:24.982 "209a7a6f-a806-4e6b-b933-c60960e7753a" 00:14:24.982 ], 00:14:24.982 "product_name": "Raid Volume", 00:14:24.982 "block_size": 512, 00:14:24.982 "num_blocks": 63488, 00:14:24.982 "uuid": "209a7a6f-a806-4e6b-b933-c60960e7753a", 00:14:24.982 "assigned_rate_limits": { 00:14:24.982 "rw_ios_per_sec": 0, 00:14:24.982 "rw_mbytes_per_sec": 0, 00:14:24.982 "r_mbytes_per_sec": 0, 00:14:24.982 "w_mbytes_per_sec": 0 00:14:24.982 }, 00:14:24.982 "claimed": false, 00:14:24.982 "zoned": false, 00:14:24.982 "supported_io_types": { 00:14:24.982 "read": true, 00:14:24.982 "write": true, 00:14:24.982 "unmap": false, 00:14:24.982 "flush": false, 00:14:24.982 "reset": true, 00:14:24.982 "nvme_admin": false, 00:14:24.982 "nvme_io": false, 00:14:24.982 "nvme_io_md": false, 00:14:24.982 "write_zeroes": true, 00:14:24.982 "zcopy": false, 00:14:24.982 "get_zone_info": false, 00:14:24.982 "zone_management": false, 00:14:24.982 "zone_append": false, 00:14:24.982 "compare": false, 00:14:24.982 "compare_and_write": false, 00:14:24.982 "abort": false, 00:14:24.982 "seek_hole": false, 00:14:24.982 "seek_data": false, 00:14:24.982 "copy": false, 00:14:24.982 
"nvme_iov_md": false 00:14:24.982 }, 00:14:24.982 "memory_domains": [ 00:14:24.982 { 00:14:24.982 "dma_device_id": "system", 00:14:24.982 "dma_device_type": 1 00:14:24.982 }, 00:14:24.982 { 00:14:24.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.982 "dma_device_type": 2 00:14:24.982 }, 00:14:24.982 { 00:14:24.982 "dma_device_id": "system", 00:14:24.982 "dma_device_type": 1 00:14:24.982 }, 00:14:24.982 { 00:14:24.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.982 "dma_device_type": 2 00:14:24.982 }, 00:14:24.982 { 00:14:24.982 "dma_device_id": "system", 00:14:24.982 "dma_device_type": 1 00:14:24.982 }, 00:14:24.982 { 00:14:24.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.982 "dma_device_type": 2 00:14:24.982 }, 00:14:24.982 { 00:14:24.982 "dma_device_id": "system", 00:14:24.982 "dma_device_type": 1 00:14:24.982 }, 00:14:24.982 { 00:14:24.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.982 "dma_device_type": 2 00:14:24.982 } 00:14:24.982 ], 00:14:24.982 "driver_specific": { 00:14:24.982 "raid": { 00:14:24.982 "uuid": "209a7a6f-a806-4e6b-b933-c60960e7753a", 00:14:24.982 "strip_size_kb": 0, 00:14:24.982 "state": "online", 00:14:24.982 "raid_level": "raid1", 00:14:24.982 "superblock": true, 00:14:24.982 "num_base_bdevs": 4, 00:14:24.982 "num_base_bdevs_discovered": 4, 00:14:24.982 "num_base_bdevs_operational": 4, 00:14:24.982 "base_bdevs_list": [ 00:14:24.982 { 00:14:24.982 "name": "BaseBdev1", 00:14:24.982 "uuid": "c16e751d-aed1-4802-928a-69f04b3876c2", 00:14:24.982 "is_configured": true, 00:14:24.982 "data_offset": 2048, 00:14:24.982 "data_size": 63488 00:14:24.982 }, 00:14:24.982 { 00:14:24.982 "name": "BaseBdev2", 00:14:24.982 "uuid": "6b0bc4d5-affc-4d5c-968b-105ff3c0ba0e", 00:14:24.982 "is_configured": true, 00:14:24.982 "data_offset": 2048, 00:14:24.982 "data_size": 63488 00:14:24.982 }, 00:14:24.982 { 00:14:24.982 "name": "BaseBdev3", 00:14:24.982 "uuid": "ce77d2a2-6e9d-4843-8c0c-5839e36a77b6", 00:14:24.982 "is_configured": true, 
00:14:24.982 "data_offset": 2048, 00:14:24.982 "data_size": 63488 00:14:24.982 }, 00:14:24.982 { 00:14:24.982 "name": "BaseBdev4", 00:14:24.982 "uuid": "674f7915-f452-4b54-b585-4203222b96cc", 00:14:24.982 "is_configured": true, 00:14:24.982 "data_offset": 2048, 00:14:24.982 "data_size": 63488 00:14:24.982 } 00:14:24.982 ] 00:14:24.982 } 00:14:24.982 } 00:14:24.982 }' 00:14:24.982 14:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:24.982 14:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:24.982 BaseBdev2 00:14:24.982 BaseBdev3 00:14:24.982 BaseBdev4' 00:14:24.982 14:40:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.982 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:24.982 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:24.982 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:24.982 14:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.982 14:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.982 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.982 14:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.982 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:24.982 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:24.982 14:40:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:24.982 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:24.982 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.982 14:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.982 14:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.241 [2024-11-04 14:40:24.247787] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:25.241 14:40:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.241 14:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.500 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.500 "name": "Existed_Raid", 00:14:25.500 "uuid": "209a7a6f-a806-4e6b-b933-c60960e7753a", 00:14:25.500 "strip_size_kb": 0, 00:14:25.500 
"state": "online", 00:14:25.500 "raid_level": "raid1", 00:14:25.500 "superblock": true, 00:14:25.500 "num_base_bdevs": 4, 00:14:25.500 "num_base_bdevs_discovered": 3, 00:14:25.500 "num_base_bdevs_operational": 3, 00:14:25.500 "base_bdevs_list": [ 00:14:25.500 { 00:14:25.500 "name": null, 00:14:25.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.500 "is_configured": false, 00:14:25.500 "data_offset": 0, 00:14:25.500 "data_size": 63488 00:14:25.500 }, 00:14:25.500 { 00:14:25.500 "name": "BaseBdev2", 00:14:25.500 "uuid": "6b0bc4d5-affc-4d5c-968b-105ff3c0ba0e", 00:14:25.500 "is_configured": true, 00:14:25.500 "data_offset": 2048, 00:14:25.500 "data_size": 63488 00:14:25.500 }, 00:14:25.500 { 00:14:25.500 "name": "BaseBdev3", 00:14:25.500 "uuid": "ce77d2a2-6e9d-4843-8c0c-5839e36a77b6", 00:14:25.500 "is_configured": true, 00:14:25.500 "data_offset": 2048, 00:14:25.500 "data_size": 63488 00:14:25.500 }, 00:14:25.500 { 00:14:25.500 "name": "BaseBdev4", 00:14:25.500 "uuid": "674f7915-f452-4b54-b585-4203222b96cc", 00:14:25.500 "is_configured": true, 00:14:25.500 "data_offset": 2048, 00:14:25.500 "data_size": 63488 00:14:25.500 } 00:14:25.500 ] 00:14:25.500 }' 00:14:25.500 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.500 14:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.067 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:26.067 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:26.067 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.067 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:26.067 14:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.067 14:40:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.067 14:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.067 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:26.067 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:26.067 14:40:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:26.067 14:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.067 14:40:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.067 [2024-11-04 14:40:24.965776] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:26.067 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.067 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:26.067 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:26.067 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.067 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.067 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.068 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:26.068 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.068 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:26.068 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:14:26.068 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:26.068 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.068 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.068 [2024-11-04 14:40:25.116853] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:26.327 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.327 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:26.327 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:26.327 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:26.327 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.327 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.327 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.327 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.327 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:26.327 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:26.327 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:26.327 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.327 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.327 [2024-11-04 14:40:25.263952] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:26.327 [2024-11-04 14:40:25.264106] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:26.327 [2024-11-04 14:40:25.349491] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:26.327 [2024-11-04 14:40:25.349564] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:26.327 [2024-11-04 14:40:25.349584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:26.327 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.327 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:26.327 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:26.327 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.327 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.327 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:26.327 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.327 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.327 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:26.327 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:26.327 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:26.327 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:26.327 14:40:25 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:26.327 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:26.327 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.327 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.587 BaseBdev2 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:14:26.587 [ 00:14:26.587 { 00:14:26.587 "name": "BaseBdev2", 00:14:26.587 "aliases": [ 00:14:26.587 "490958e1-12fe-4e03-a3cd-75e641d12759" 00:14:26.587 ], 00:14:26.587 "product_name": "Malloc disk", 00:14:26.587 "block_size": 512, 00:14:26.587 "num_blocks": 65536, 00:14:26.587 "uuid": "490958e1-12fe-4e03-a3cd-75e641d12759", 00:14:26.587 "assigned_rate_limits": { 00:14:26.587 "rw_ios_per_sec": 0, 00:14:26.587 "rw_mbytes_per_sec": 0, 00:14:26.587 "r_mbytes_per_sec": 0, 00:14:26.587 "w_mbytes_per_sec": 0 00:14:26.587 }, 00:14:26.587 "claimed": false, 00:14:26.587 "zoned": false, 00:14:26.587 "supported_io_types": { 00:14:26.587 "read": true, 00:14:26.587 "write": true, 00:14:26.587 "unmap": true, 00:14:26.587 "flush": true, 00:14:26.587 "reset": true, 00:14:26.587 "nvme_admin": false, 00:14:26.587 "nvme_io": false, 00:14:26.587 "nvme_io_md": false, 00:14:26.587 "write_zeroes": true, 00:14:26.587 "zcopy": true, 00:14:26.587 "get_zone_info": false, 00:14:26.587 "zone_management": false, 00:14:26.587 "zone_append": false, 00:14:26.587 "compare": false, 00:14:26.587 "compare_and_write": false, 00:14:26.587 "abort": true, 00:14:26.587 "seek_hole": false, 00:14:26.587 "seek_data": false, 00:14:26.587 "copy": true, 00:14:26.587 "nvme_iov_md": false 00:14:26.587 }, 00:14:26.587 "memory_domains": [ 00:14:26.587 { 00:14:26.587 "dma_device_id": "system", 00:14:26.587 "dma_device_type": 1 00:14:26.587 }, 00:14:26.587 { 00:14:26.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.587 "dma_device_type": 2 00:14:26.587 } 00:14:26.587 ], 00:14:26.587 "driver_specific": {} 00:14:26.587 } 00:14:26.587 ] 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:26.587 14:40:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.587 BaseBdev3 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.587 14:40:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.587 [ 00:14:26.587 { 00:14:26.587 "name": "BaseBdev3", 00:14:26.587 "aliases": [ 00:14:26.587 "87063b12-33a5-4e3a-b6eb-08926ac51204" 00:14:26.587 ], 00:14:26.587 "product_name": "Malloc disk", 00:14:26.587 "block_size": 512, 00:14:26.587 "num_blocks": 65536, 00:14:26.587 "uuid": "87063b12-33a5-4e3a-b6eb-08926ac51204", 00:14:26.587 "assigned_rate_limits": { 00:14:26.587 "rw_ios_per_sec": 0, 00:14:26.587 "rw_mbytes_per_sec": 0, 00:14:26.587 "r_mbytes_per_sec": 0, 00:14:26.587 "w_mbytes_per_sec": 0 00:14:26.587 }, 00:14:26.587 "claimed": false, 00:14:26.587 "zoned": false, 00:14:26.587 "supported_io_types": { 00:14:26.587 "read": true, 00:14:26.587 "write": true, 00:14:26.587 "unmap": true, 00:14:26.587 "flush": true, 00:14:26.587 "reset": true, 00:14:26.587 "nvme_admin": false, 00:14:26.587 "nvme_io": false, 00:14:26.587 "nvme_io_md": false, 00:14:26.587 "write_zeroes": true, 00:14:26.587 "zcopy": true, 00:14:26.587 "get_zone_info": false, 00:14:26.587 "zone_management": false, 00:14:26.587 "zone_append": false, 00:14:26.587 "compare": false, 00:14:26.587 "compare_and_write": false, 00:14:26.587 "abort": true, 00:14:26.587 "seek_hole": false, 00:14:26.587 "seek_data": false, 00:14:26.587 "copy": true, 00:14:26.587 "nvme_iov_md": false 00:14:26.587 }, 00:14:26.587 "memory_domains": [ 00:14:26.587 { 00:14:26.587 "dma_device_id": "system", 00:14:26.587 "dma_device_type": 1 00:14:26.587 }, 00:14:26.587 { 00:14:26.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.587 "dma_device_type": 2 00:14:26.587 } 00:14:26.587 ], 00:14:26.587 "driver_specific": {} 00:14:26.587 } 00:14:26.587 ] 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.587 BaseBdev4 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:26.587 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:26.588 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:26.588 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.588 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.588 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.588 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:26.588 14:40:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.588 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.588 [ 00:14:26.588 { 00:14:26.588 "name": "BaseBdev4", 00:14:26.588 "aliases": [ 00:14:26.588 "2b87801e-a3c8-46c5-b765-2502dc7193ea" 00:14:26.588 ], 00:14:26.588 "product_name": "Malloc disk", 00:14:26.588 "block_size": 512, 00:14:26.588 "num_blocks": 65536, 00:14:26.588 "uuid": "2b87801e-a3c8-46c5-b765-2502dc7193ea", 00:14:26.588 "assigned_rate_limits": { 00:14:26.588 "rw_ios_per_sec": 0, 00:14:26.588 "rw_mbytes_per_sec": 0, 00:14:26.588 "r_mbytes_per_sec": 0, 00:14:26.588 "w_mbytes_per_sec": 0 00:14:26.588 }, 00:14:26.588 "claimed": false, 00:14:26.588 "zoned": false, 00:14:26.588 "supported_io_types": { 00:14:26.588 "read": true, 00:14:26.588 "write": true, 00:14:26.588 "unmap": true, 00:14:26.588 "flush": true, 00:14:26.588 "reset": true, 00:14:26.588 "nvme_admin": false, 00:14:26.588 "nvme_io": false, 00:14:26.588 "nvme_io_md": false, 00:14:26.588 "write_zeroes": true, 00:14:26.588 "zcopy": true, 00:14:26.588 "get_zone_info": false, 00:14:26.588 "zone_management": false, 00:14:26.588 "zone_append": false, 00:14:26.588 "compare": false, 00:14:26.588 "compare_and_write": false, 00:14:26.588 "abort": true, 00:14:26.588 "seek_hole": false, 00:14:26.588 "seek_data": false, 00:14:26.588 "copy": true, 00:14:26.588 "nvme_iov_md": false 00:14:26.588 }, 00:14:26.588 "memory_domains": [ 00:14:26.588 { 00:14:26.588 "dma_device_id": "system", 00:14:26.588 "dma_device_type": 1 00:14:26.588 }, 00:14:26.588 { 00:14:26.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.588 "dma_device_type": 2 00:14:26.588 } 00:14:26.588 ], 00:14:26.588 "driver_specific": {} 00:14:26.588 } 00:14:26.588 ] 00:14:26.588 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.588 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:14:26.588 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:26.588 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:26.588 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:26.588 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.588 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.588 [2024-11-04 14:40:25.644895] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:26.588 [2024-11-04 14:40:25.645177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:26.588 [2024-11-04 14:40:25.645325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:26.588 [2024-11-04 14:40:25.647907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:26.588 [2024-11-04 14:40:25.648168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:26.588 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.588 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:26.588 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.588 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:26.588 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:26.588 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:14:26.588 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:26.588 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.588 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.588 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.588 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.588 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.588 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.588 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.588 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.588 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.588 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.588 "name": "Existed_Raid", 00:14:26.588 "uuid": "5e626ebc-0d65-4cf1-b180-4b63775e3cc3", 00:14:26.588 "strip_size_kb": 0, 00:14:26.588 "state": "configuring", 00:14:26.588 "raid_level": "raid1", 00:14:26.588 "superblock": true, 00:14:26.588 "num_base_bdevs": 4, 00:14:26.588 "num_base_bdevs_discovered": 3, 00:14:26.588 "num_base_bdevs_operational": 4, 00:14:26.588 "base_bdevs_list": [ 00:14:26.588 { 00:14:26.588 "name": "BaseBdev1", 00:14:26.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.588 "is_configured": false, 00:14:26.588 "data_offset": 0, 00:14:26.588 "data_size": 0 00:14:26.588 }, 00:14:26.588 { 00:14:26.588 "name": "BaseBdev2", 00:14:26.588 "uuid": "490958e1-12fe-4e03-a3cd-75e641d12759", 
00:14:26.588 "is_configured": true, 00:14:26.588 "data_offset": 2048, 00:14:26.588 "data_size": 63488 00:14:26.588 }, 00:14:26.588 { 00:14:26.588 "name": "BaseBdev3", 00:14:26.588 "uuid": "87063b12-33a5-4e3a-b6eb-08926ac51204", 00:14:26.588 "is_configured": true, 00:14:26.588 "data_offset": 2048, 00:14:26.588 "data_size": 63488 00:14:26.588 }, 00:14:26.588 { 00:14:26.588 "name": "BaseBdev4", 00:14:26.588 "uuid": "2b87801e-a3c8-46c5-b765-2502dc7193ea", 00:14:26.588 "is_configured": true, 00:14:26.588 "data_offset": 2048, 00:14:26.588 "data_size": 63488 00:14:26.588 } 00:14:26.588 ] 00:14:26.588 }' 00:14:26.588 14:40:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.588 14:40:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.155 14:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:27.155 14:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.155 14:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.155 [2024-11-04 14:40:26.189065] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:27.155 14:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.155 14:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:27.155 14:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:27.155 14:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:27.155 14:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.155 14:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:14:27.155 14:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:27.155 14:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.155 14:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.155 14:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.155 14:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.155 14:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.155 14:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.155 14:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.155 14:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.155 14:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.155 14:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.155 "name": "Existed_Raid", 00:14:27.155 "uuid": "5e626ebc-0d65-4cf1-b180-4b63775e3cc3", 00:14:27.155 "strip_size_kb": 0, 00:14:27.155 "state": "configuring", 00:14:27.155 "raid_level": "raid1", 00:14:27.155 "superblock": true, 00:14:27.155 "num_base_bdevs": 4, 00:14:27.155 "num_base_bdevs_discovered": 2, 00:14:27.155 "num_base_bdevs_operational": 4, 00:14:27.155 "base_bdevs_list": [ 00:14:27.155 { 00:14:27.155 "name": "BaseBdev1", 00:14:27.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.155 "is_configured": false, 00:14:27.155 "data_offset": 0, 00:14:27.155 "data_size": 0 00:14:27.155 }, 00:14:27.155 { 00:14:27.155 "name": null, 00:14:27.155 "uuid": "490958e1-12fe-4e03-a3cd-75e641d12759", 00:14:27.155 
"is_configured": false, 00:14:27.155 "data_offset": 0, 00:14:27.155 "data_size": 63488 00:14:27.155 }, 00:14:27.155 { 00:14:27.155 "name": "BaseBdev3", 00:14:27.155 "uuid": "87063b12-33a5-4e3a-b6eb-08926ac51204", 00:14:27.155 "is_configured": true, 00:14:27.155 "data_offset": 2048, 00:14:27.155 "data_size": 63488 00:14:27.155 }, 00:14:27.155 { 00:14:27.155 "name": "BaseBdev4", 00:14:27.155 "uuid": "2b87801e-a3c8-46c5-b765-2502dc7193ea", 00:14:27.155 "is_configured": true, 00:14:27.155 "data_offset": 2048, 00:14:27.155 "data_size": 63488 00:14:27.155 } 00:14:27.155 ] 00:14:27.155 }' 00:14:27.156 14:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.156 14:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.733 14:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:27.733 14:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.733 14:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.733 14:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.733 14:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.733 14:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:27.733 14:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:27.733 14:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.733 14:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.733 [2024-11-04 14:40:26.829270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:27.733 BaseBdev1 
00:14:27.733 14:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.733 14:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:27.733 14:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:14:27.733 14:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:27.733 14:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:27.733 14:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:27.733 14:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:27.733 14:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:27.733 14:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.733 14:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.733 14:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.733 14:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:27.733 14:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.733 14:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.015 [ 00:14:28.015 { 00:14:28.015 "name": "BaseBdev1", 00:14:28.015 "aliases": [ 00:14:28.015 "9eee39a0-27a1-47d8-be78-ad944093ceba" 00:14:28.015 ], 00:14:28.015 "product_name": "Malloc disk", 00:14:28.015 "block_size": 512, 00:14:28.015 "num_blocks": 65536, 00:14:28.015 "uuid": "9eee39a0-27a1-47d8-be78-ad944093ceba", 00:14:28.015 "assigned_rate_limits": { 00:14:28.015 
"rw_ios_per_sec": 0, 00:14:28.015 "rw_mbytes_per_sec": 0, 00:14:28.015 "r_mbytes_per_sec": 0, 00:14:28.015 "w_mbytes_per_sec": 0 00:14:28.015 }, 00:14:28.015 "claimed": true, 00:14:28.015 "claim_type": "exclusive_write", 00:14:28.015 "zoned": false, 00:14:28.015 "supported_io_types": { 00:14:28.015 "read": true, 00:14:28.015 "write": true, 00:14:28.015 "unmap": true, 00:14:28.015 "flush": true, 00:14:28.015 "reset": true, 00:14:28.015 "nvme_admin": false, 00:14:28.015 "nvme_io": false, 00:14:28.015 "nvme_io_md": false, 00:14:28.015 "write_zeroes": true, 00:14:28.015 "zcopy": true, 00:14:28.015 "get_zone_info": false, 00:14:28.015 "zone_management": false, 00:14:28.015 "zone_append": false, 00:14:28.015 "compare": false, 00:14:28.015 "compare_and_write": false, 00:14:28.015 "abort": true, 00:14:28.015 "seek_hole": false, 00:14:28.015 "seek_data": false, 00:14:28.015 "copy": true, 00:14:28.015 "nvme_iov_md": false 00:14:28.015 }, 00:14:28.015 "memory_domains": [ 00:14:28.015 { 00:14:28.015 "dma_device_id": "system", 00:14:28.015 "dma_device_type": 1 00:14:28.015 }, 00:14:28.015 { 00:14:28.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.015 "dma_device_type": 2 00:14:28.015 } 00:14:28.015 ], 00:14:28.015 "driver_specific": {} 00:14:28.015 } 00:14:28.015 ] 00:14:28.015 14:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.015 14:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:28.015 14:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:28.015 14:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:28.015 14:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.015 14:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:14:28.015 14:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:28.015 14:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:28.015 14:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.015 14:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.015 14:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.015 14:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.015 14:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.015 14:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.015 14:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.016 14:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.016 14:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.016 14:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.016 "name": "Existed_Raid", 00:14:28.016 "uuid": "5e626ebc-0d65-4cf1-b180-4b63775e3cc3", 00:14:28.016 "strip_size_kb": 0, 00:14:28.016 "state": "configuring", 00:14:28.016 "raid_level": "raid1", 00:14:28.016 "superblock": true, 00:14:28.016 "num_base_bdevs": 4, 00:14:28.016 "num_base_bdevs_discovered": 3, 00:14:28.016 "num_base_bdevs_operational": 4, 00:14:28.016 "base_bdevs_list": [ 00:14:28.016 { 00:14:28.016 "name": "BaseBdev1", 00:14:28.016 "uuid": "9eee39a0-27a1-47d8-be78-ad944093ceba", 00:14:28.016 "is_configured": true, 00:14:28.016 "data_offset": 2048, 00:14:28.016 "data_size": 63488 
00:14:28.016 }, 00:14:28.016 { 00:14:28.016 "name": null, 00:14:28.016 "uuid": "490958e1-12fe-4e03-a3cd-75e641d12759", 00:14:28.016 "is_configured": false, 00:14:28.016 "data_offset": 0, 00:14:28.016 "data_size": 63488 00:14:28.016 }, 00:14:28.016 { 00:14:28.016 "name": "BaseBdev3", 00:14:28.016 "uuid": "87063b12-33a5-4e3a-b6eb-08926ac51204", 00:14:28.016 "is_configured": true, 00:14:28.016 "data_offset": 2048, 00:14:28.016 "data_size": 63488 00:14:28.016 }, 00:14:28.016 { 00:14:28.016 "name": "BaseBdev4", 00:14:28.016 "uuid": "2b87801e-a3c8-46c5-b765-2502dc7193ea", 00:14:28.016 "is_configured": true, 00:14:28.016 "data_offset": 2048, 00:14:28.016 "data_size": 63488 00:14:28.016 } 00:14:28.016 ] 00:14:28.016 }' 00:14:28.016 14:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.016 14:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.583 14:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.584 14:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.584 14:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.584 14:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:28.584 14:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.584 14:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:28.584 14:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:28.584 14:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.584 14:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.584 
[2024-11-04 14:40:27.465595] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:28.584 14:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.584 14:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:28.584 14:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:28.584 14:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.584 14:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:28.584 14:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:28.584 14:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:28.584 14:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.584 14:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.584 14:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.584 14:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.584 14:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.584 14:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.584 14:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.584 14:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.584 14:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.584 14:40:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.584 "name": "Existed_Raid", 00:14:28.584 "uuid": "5e626ebc-0d65-4cf1-b180-4b63775e3cc3", 00:14:28.584 "strip_size_kb": 0, 00:14:28.584 "state": "configuring", 00:14:28.584 "raid_level": "raid1", 00:14:28.584 "superblock": true, 00:14:28.584 "num_base_bdevs": 4, 00:14:28.584 "num_base_bdevs_discovered": 2, 00:14:28.584 "num_base_bdevs_operational": 4, 00:14:28.584 "base_bdevs_list": [ 00:14:28.584 { 00:14:28.584 "name": "BaseBdev1", 00:14:28.584 "uuid": "9eee39a0-27a1-47d8-be78-ad944093ceba", 00:14:28.584 "is_configured": true, 00:14:28.584 "data_offset": 2048, 00:14:28.584 "data_size": 63488 00:14:28.584 }, 00:14:28.584 { 00:14:28.584 "name": null, 00:14:28.584 "uuid": "490958e1-12fe-4e03-a3cd-75e641d12759", 00:14:28.584 "is_configured": false, 00:14:28.584 "data_offset": 0, 00:14:28.584 "data_size": 63488 00:14:28.584 }, 00:14:28.584 { 00:14:28.584 "name": null, 00:14:28.584 "uuid": "87063b12-33a5-4e3a-b6eb-08926ac51204", 00:14:28.584 "is_configured": false, 00:14:28.584 "data_offset": 0, 00:14:28.584 "data_size": 63488 00:14:28.584 }, 00:14:28.584 { 00:14:28.584 "name": "BaseBdev4", 00:14:28.584 "uuid": "2b87801e-a3c8-46c5-b765-2502dc7193ea", 00:14:28.584 "is_configured": true, 00:14:28.584 "data_offset": 2048, 00:14:28.584 "data_size": 63488 00:14:28.584 } 00:14:28.584 ] 00:14:28.584 }' 00:14:28.584 14:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.584 14:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.150 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.150 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:29.150 14:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.150 
14:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.150 14:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.150 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:29.150 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:29.150 14:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.151 14:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.151 [2024-11-04 14:40:28.129787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:29.151 14:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.151 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:29.151 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:29.151 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:29.151 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.151 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.151 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:29.151 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.151 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.151 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:29.151 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.151 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.151 14:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.151 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.151 14:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.151 14:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.151 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.151 "name": "Existed_Raid", 00:14:29.151 "uuid": "5e626ebc-0d65-4cf1-b180-4b63775e3cc3", 00:14:29.151 "strip_size_kb": 0, 00:14:29.151 "state": "configuring", 00:14:29.151 "raid_level": "raid1", 00:14:29.151 "superblock": true, 00:14:29.151 "num_base_bdevs": 4, 00:14:29.151 "num_base_bdevs_discovered": 3, 00:14:29.151 "num_base_bdevs_operational": 4, 00:14:29.151 "base_bdevs_list": [ 00:14:29.151 { 00:14:29.151 "name": "BaseBdev1", 00:14:29.151 "uuid": "9eee39a0-27a1-47d8-be78-ad944093ceba", 00:14:29.151 "is_configured": true, 00:14:29.151 "data_offset": 2048, 00:14:29.151 "data_size": 63488 00:14:29.151 }, 00:14:29.151 { 00:14:29.151 "name": null, 00:14:29.151 "uuid": "490958e1-12fe-4e03-a3cd-75e641d12759", 00:14:29.151 "is_configured": false, 00:14:29.151 "data_offset": 0, 00:14:29.151 "data_size": 63488 00:14:29.151 }, 00:14:29.151 { 00:14:29.151 "name": "BaseBdev3", 00:14:29.151 "uuid": "87063b12-33a5-4e3a-b6eb-08926ac51204", 00:14:29.151 "is_configured": true, 00:14:29.151 "data_offset": 2048, 00:14:29.151 "data_size": 63488 00:14:29.151 }, 00:14:29.151 { 00:14:29.151 "name": "BaseBdev4", 00:14:29.151 "uuid": 
"2b87801e-a3c8-46c5-b765-2502dc7193ea", 00:14:29.151 "is_configured": true, 00:14:29.151 "data_offset": 2048, 00:14:29.151 "data_size": 63488 00:14:29.151 } 00:14:29.151 ] 00:14:29.151 }' 00:14:29.151 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.151 14:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.718 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.718 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:29.718 14:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.718 14:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.718 14:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.718 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:29.718 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:29.718 14:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.718 14:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.718 [2024-11-04 14:40:28.710049] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:29.718 14:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.718 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:29.718 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:29.718 14:40:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:29.718 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:29.718 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:29.718 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:29.718 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.718 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.718 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.718 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.718 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.718 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.718 14:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.718 14:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.718 14:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.977 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.977 "name": "Existed_Raid", 00:14:29.977 "uuid": "5e626ebc-0d65-4cf1-b180-4b63775e3cc3", 00:14:29.977 "strip_size_kb": 0, 00:14:29.977 "state": "configuring", 00:14:29.977 "raid_level": "raid1", 00:14:29.977 "superblock": true, 00:14:29.977 "num_base_bdevs": 4, 00:14:29.977 "num_base_bdevs_discovered": 2, 00:14:29.977 "num_base_bdevs_operational": 4, 00:14:29.977 "base_bdevs_list": [ 00:14:29.977 { 00:14:29.977 "name": null, 00:14:29.977 
"uuid": "9eee39a0-27a1-47d8-be78-ad944093ceba", 00:14:29.977 "is_configured": false, 00:14:29.977 "data_offset": 0, 00:14:29.977 "data_size": 63488 00:14:29.977 }, 00:14:29.977 { 00:14:29.977 "name": null, 00:14:29.977 "uuid": "490958e1-12fe-4e03-a3cd-75e641d12759", 00:14:29.977 "is_configured": false, 00:14:29.977 "data_offset": 0, 00:14:29.977 "data_size": 63488 00:14:29.977 }, 00:14:29.977 { 00:14:29.977 "name": "BaseBdev3", 00:14:29.977 "uuid": "87063b12-33a5-4e3a-b6eb-08926ac51204", 00:14:29.977 "is_configured": true, 00:14:29.977 "data_offset": 2048, 00:14:29.977 "data_size": 63488 00:14:29.977 }, 00:14:29.977 { 00:14:29.977 "name": "BaseBdev4", 00:14:29.977 "uuid": "2b87801e-a3c8-46c5-b765-2502dc7193ea", 00:14:29.977 "is_configured": true, 00:14:29.977 "data_offset": 2048, 00:14:29.977 "data_size": 63488 00:14:29.977 } 00:14:29.977 ] 00:14:29.977 }' 00:14:29.977 14:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.977 14:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.234 14:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:30.234 14:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.234 14:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.234 14:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.494 14:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.494 14:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:30.494 14:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:30.494 14:40:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.494 14:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.494 [2024-11-04 14:40:29.402404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:30.494 14:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.494 14:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:14:30.494 14:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:30.494 14:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.494 14:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:30.494 14:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:30.494 14:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:30.494 14:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.494 14:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.494 14:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.494 14:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.494 14:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.494 14:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.494 14:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.494 14:40:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.494 14:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.494 14:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.494 "name": "Existed_Raid", 00:14:30.494 "uuid": "5e626ebc-0d65-4cf1-b180-4b63775e3cc3", 00:14:30.494 "strip_size_kb": 0, 00:14:30.494 "state": "configuring", 00:14:30.494 "raid_level": "raid1", 00:14:30.494 "superblock": true, 00:14:30.494 "num_base_bdevs": 4, 00:14:30.494 "num_base_bdevs_discovered": 3, 00:14:30.494 "num_base_bdevs_operational": 4, 00:14:30.494 "base_bdevs_list": [ 00:14:30.494 { 00:14:30.494 "name": null, 00:14:30.494 "uuid": "9eee39a0-27a1-47d8-be78-ad944093ceba", 00:14:30.494 "is_configured": false, 00:14:30.494 "data_offset": 0, 00:14:30.494 "data_size": 63488 00:14:30.494 }, 00:14:30.494 { 00:14:30.494 "name": "BaseBdev2", 00:14:30.494 "uuid": "490958e1-12fe-4e03-a3cd-75e641d12759", 00:14:30.494 "is_configured": true, 00:14:30.494 "data_offset": 2048, 00:14:30.494 "data_size": 63488 00:14:30.494 }, 00:14:30.494 { 00:14:30.494 "name": "BaseBdev3", 00:14:30.494 "uuid": "87063b12-33a5-4e3a-b6eb-08926ac51204", 00:14:30.494 "is_configured": true, 00:14:30.494 "data_offset": 2048, 00:14:30.494 "data_size": 63488 00:14:30.494 }, 00:14:30.494 { 00:14:30.494 "name": "BaseBdev4", 00:14:30.494 "uuid": "2b87801e-a3c8-46c5-b765-2502dc7193ea", 00:14:30.494 "is_configured": true, 00:14:30.494 "data_offset": 2048, 00:14:30.494 "data_size": 63488 00:14:30.494 } 00:14:30.494 ] 00:14:30.494 }' 00:14:30.494 14:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.494 14:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.062 14:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:31.062 14:40:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.062 14:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.062 14:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.062 14:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.062 14:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:31.062 14:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.062 14:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.062 14:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.062 14:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:31.062 14:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.062 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9eee39a0-27a1-47d8-be78-ad944093ceba 00:14:31.062 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.062 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.062 [2024-11-04 14:40:30.072777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:31.062 [2024-11-04 14:40:30.073300] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:31.062 [2024-11-04 14:40:30.073332] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:31.062 NewBaseBdev 00:14:31.062 [2024-11-04 14:40:30.073659] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:31.062 [2024-11-04 14:40:30.073870] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:31.062 [2024-11-04 14:40:30.073892] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:31.062 [2024-11-04 14:40:30.074083] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.062 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.062 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:31.062 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:14:31.062 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:31.062 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:31.062 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:31.062 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:31.062 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:31.062 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.062 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.062 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.062 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:31.062 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.062 14:40:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.062 [ 00:14:31.062 { 00:14:31.062 "name": "NewBaseBdev", 00:14:31.062 "aliases": [ 00:14:31.062 "9eee39a0-27a1-47d8-be78-ad944093ceba" 00:14:31.062 ], 00:14:31.062 "product_name": "Malloc disk", 00:14:31.062 "block_size": 512, 00:14:31.062 "num_blocks": 65536, 00:14:31.062 "uuid": "9eee39a0-27a1-47d8-be78-ad944093ceba", 00:14:31.062 "assigned_rate_limits": { 00:14:31.062 "rw_ios_per_sec": 0, 00:14:31.062 "rw_mbytes_per_sec": 0, 00:14:31.062 "r_mbytes_per_sec": 0, 00:14:31.062 "w_mbytes_per_sec": 0 00:14:31.062 }, 00:14:31.062 "claimed": true, 00:14:31.062 "claim_type": "exclusive_write", 00:14:31.062 "zoned": false, 00:14:31.062 "supported_io_types": { 00:14:31.062 "read": true, 00:14:31.062 "write": true, 00:14:31.062 "unmap": true, 00:14:31.062 "flush": true, 00:14:31.062 "reset": true, 00:14:31.062 "nvme_admin": false, 00:14:31.062 "nvme_io": false, 00:14:31.062 "nvme_io_md": false, 00:14:31.062 "write_zeroes": true, 00:14:31.062 "zcopy": true, 00:14:31.062 "get_zone_info": false, 00:14:31.062 "zone_management": false, 00:14:31.062 "zone_append": false, 00:14:31.062 "compare": false, 00:14:31.062 "compare_and_write": false, 00:14:31.062 "abort": true, 00:14:31.062 "seek_hole": false, 00:14:31.062 "seek_data": false, 00:14:31.062 "copy": true, 00:14:31.062 "nvme_iov_md": false 00:14:31.062 }, 00:14:31.062 "memory_domains": [ 00:14:31.062 { 00:14:31.062 "dma_device_id": "system", 00:14:31.062 "dma_device_type": 1 00:14:31.062 }, 00:14:31.062 { 00:14:31.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.062 "dma_device_type": 2 00:14:31.062 } 00:14:31.062 ], 00:14:31.062 "driver_specific": {} 00:14:31.062 } 00:14:31.062 ] 00:14:31.062 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.062 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:31.062 14:40:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:14:31.062 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.062 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.062 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.062 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.062 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:31.062 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.062 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.062 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.062 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.062 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.062 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.062 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.062 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.062 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.063 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.063 "name": "Existed_Raid", 00:14:31.063 "uuid": "5e626ebc-0d65-4cf1-b180-4b63775e3cc3", 00:14:31.063 "strip_size_kb": 0, 00:14:31.063 
"state": "online", 00:14:31.063 "raid_level": "raid1", 00:14:31.063 "superblock": true, 00:14:31.063 "num_base_bdevs": 4, 00:14:31.063 "num_base_bdevs_discovered": 4, 00:14:31.063 "num_base_bdevs_operational": 4, 00:14:31.063 "base_bdevs_list": [ 00:14:31.063 { 00:14:31.063 "name": "NewBaseBdev", 00:14:31.063 "uuid": "9eee39a0-27a1-47d8-be78-ad944093ceba", 00:14:31.063 "is_configured": true, 00:14:31.063 "data_offset": 2048, 00:14:31.063 "data_size": 63488 00:14:31.063 }, 00:14:31.063 { 00:14:31.063 "name": "BaseBdev2", 00:14:31.063 "uuid": "490958e1-12fe-4e03-a3cd-75e641d12759", 00:14:31.063 "is_configured": true, 00:14:31.063 "data_offset": 2048, 00:14:31.063 "data_size": 63488 00:14:31.063 }, 00:14:31.063 { 00:14:31.063 "name": "BaseBdev3", 00:14:31.063 "uuid": "87063b12-33a5-4e3a-b6eb-08926ac51204", 00:14:31.063 "is_configured": true, 00:14:31.063 "data_offset": 2048, 00:14:31.063 "data_size": 63488 00:14:31.063 }, 00:14:31.063 { 00:14:31.063 "name": "BaseBdev4", 00:14:31.063 "uuid": "2b87801e-a3c8-46c5-b765-2502dc7193ea", 00:14:31.063 "is_configured": true, 00:14:31.063 "data_offset": 2048, 00:14:31.063 "data_size": 63488 00:14:31.063 } 00:14:31.063 ] 00:14:31.063 }' 00:14:31.063 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.063 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.631 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:31.631 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:31.631 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:31.631 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:31.631 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:31.631 
14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:31.631 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:31.631 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.631 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.631 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:31.631 [2024-11-04 14:40:30.617442] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:31.631 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.631 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:31.631 "name": "Existed_Raid", 00:14:31.631 "aliases": [ 00:14:31.631 "5e626ebc-0d65-4cf1-b180-4b63775e3cc3" 00:14:31.631 ], 00:14:31.631 "product_name": "Raid Volume", 00:14:31.631 "block_size": 512, 00:14:31.631 "num_blocks": 63488, 00:14:31.631 "uuid": "5e626ebc-0d65-4cf1-b180-4b63775e3cc3", 00:14:31.631 "assigned_rate_limits": { 00:14:31.631 "rw_ios_per_sec": 0, 00:14:31.631 "rw_mbytes_per_sec": 0, 00:14:31.631 "r_mbytes_per_sec": 0, 00:14:31.631 "w_mbytes_per_sec": 0 00:14:31.631 }, 00:14:31.631 "claimed": false, 00:14:31.631 "zoned": false, 00:14:31.631 "supported_io_types": { 00:14:31.631 "read": true, 00:14:31.631 "write": true, 00:14:31.631 "unmap": false, 00:14:31.631 "flush": false, 00:14:31.631 "reset": true, 00:14:31.631 "nvme_admin": false, 00:14:31.631 "nvme_io": false, 00:14:31.631 "nvme_io_md": false, 00:14:31.631 "write_zeroes": true, 00:14:31.631 "zcopy": false, 00:14:31.631 "get_zone_info": false, 00:14:31.631 "zone_management": false, 00:14:31.631 "zone_append": false, 00:14:31.631 "compare": false, 00:14:31.631 "compare_and_write": false, 00:14:31.631 
"abort": false, 00:14:31.631 "seek_hole": false, 00:14:31.631 "seek_data": false, 00:14:31.631 "copy": false, 00:14:31.631 "nvme_iov_md": false 00:14:31.631 }, 00:14:31.631 "memory_domains": [ 00:14:31.631 { 00:14:31.631 "dma_device_id": "system", 00:14:31.631 "dma_device_type": 1 00:14:31.631 }, 00:14:31.631 { 00:14:31.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.631 "dma_device_type": 2 00:14:31.631 }, 00:14:31.631 { 00:14:31.631 "dma_device_id": "system", 00:14:31.631 "dma_device_type": 1 00:14:31.631 }, 00:14:31.631 { 00:14:31.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.631 "dma_device_type": 2 00:14:31.631 }, 00:14:31.631 { 00:14:31.631 "dma_device_id": "system", 00:14:31.631 "dma_device_type": 1 00:14:31.631 }, 00:14:31.631 { 00:14:31.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.631 "dma_device_type": 2 00:14:31.631 }, 00:14:31.631 { 00:14:31.631 "dma_device_id": "system", 00:14:31.631 "dma_device_type": 1 00:14:31.631 }, 00:14:31.631 { 00:14:31.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.631 "dma_device_type": 2 00:14:31.631 } 00:14:31.631 ], 00:14:31.631 "driver_specific": { 00:14:31.631 "raid": { 00:14:31.631 "uuid": "5e626ebc-0d65-4cf1-b180-4b63775e3cc3", 00:14:31.631 "strip_size_kb": 0, 00:14:31.631 "state": "online", 00:14:31.631 "raid_level": "raid1", 00:14:31.631 "superblock": true, 00:14:31.631 "num_base_bdevs": 4, 00:14:31.631 "num_base_bdevs_discovered": 4, 00:14:31.631 "num_base_bdevs_operational": 4, 00:14:31.631 "base_bdevs_list": [ 00:14:31.631 { 00:14:31.631 "name": "NewBaseBdev", 00:14:31.631 "uuid": "9eee39a0-27a1-47d8-be78-ad944093ceba", 00:14:31.631 "is_configured": true, 00:14:31.631 "data_offset": 2048, 00:14:31.631 "data_size": 63488 00:14:31.631 }, 00:14:31.631 { 00:14:31.631 "name": "BaseBdev2", 00:14:31.631 "uuid": "490958e1-12fe-4e03-a3cd-75e641d12759", 00:14:31.631 "is_configured": true, 00:14:31.631 "data_offset": 2048, 00:14:31.631 "data_size": 63488 00:14:31.631 }, 00:14:31.631 { 
00:14:31.631 "name": "BaseBdev3", 00:14:31.631 "uuid": "87063b12-33a5-4e3a-b6eb-08926ac51204", 00:14:31.631 "is_configured": true, 00:14:31.631 "data_offset": 2048, 00:14:31.631 "data_size": 63488 00:14:31.631 }, 00:14:31.631 { 00:14:31.631 "name": "BaseBdev4", 00:14:31.631 "uuid": "2b87801e-a3c8-46c5-b765-2502dc7193ea", 00:14:31.631 "is_configured": true, 00:14:31.631 "data_offset": 2048, 00:14:31.631 "data_size": 63488 00:14:31.631 } 00:14:31.631 ] 00:14:31.631 } 00:14:31.631 } 00:14:31.631 }' 00:14:31.631 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:31.631 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:31.631 BaseBdev2 00:14:31.631 BaseBdev3 00:14:31.631 BaseBdev4' 00:14:31.631 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.891 [2024-11-04 14:40:30.993148] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:31.891 [2024-11-04 14:40:30.993181] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:31.891 [2024-11-04 14:40:30.993279] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.891 [2024-11-04 14:40:30.993655] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:31.891 [2024-11-04 14:40:30.993677] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73999 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 73999 ']' 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 73999 00:14:31.891 14:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:14:31.891 14:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:31.891 14:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73999 00:14:32.150 14:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:32.150 14:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:32.150 killing process with pid 73999 00:14:32.150 14:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73999' 00:14:32.150 14:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 73999 00:14:32.150 [2024-11-04 14:40:31.034190] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:32.150 14:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 73999 00:14:32.408 [2024-11-04 14:40:31.402275] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:33.344 ************************************ 00:14:33.344 END TEST raid_state_function_test_sb 00:14:33.344 ************************************ 00:14:33.344 14:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:33.344 00:14:33.344 real 0m13.364s 
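The repeated `[[ 512 == \5\1\2\ \ \ ]]` checks above compare a per-bdev "property signature" built by `jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'`. A small Python sketch of that signature, assuming (as the log's `'512 '` value and the three escaped spaces in the test pattern suggest) that the `md_*`/`dif_type` fields are simply absent for these bdevs, which jq reads as `null` and `join()` renders as empty strings:

```python
def property_signature(bdev):
    # Mirrors: jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
    # Absent fields read as null; jq's join() turns null into an empty
    # string, so the result is '512' followed by three separator spaces.
    keys = ("block_size", "md_size", "md_interleave", "dif_type")
    return " ".join("" if bdev.get(k) is None else str(bdev.get(k)) for k in keys)

raid_bdev = {"block_size": 512}   # md_size / md_interleave / dif_type omitted
base_bdev = {"block_size": 512}
sig = property_signature(raid_bdev)
print(repr(sig))  # '512   '
```

The test passes when the raid bdev's signature equals each base bdev's signature, i.e. every member agrees on block size and metadata/DIF layout.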
00:14:33.344 user 0m22.253s 00:14:33.344 sys 0m1.831s 00:14:33.344 14:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:33.344 14:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.608 14:40:32 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:14:33.608 14:40:32 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:14:33.608 14:40:32 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:33.608 14:40:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:33.608 ************************************ 00:14:33.608 START TEST raid_superblock_test 00:14:33.608 ************************************ 00:14:33.608 14:40:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 4 00:14:33.608 14:40:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:14:33.608 14:40:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:33.608 14:40:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:33.608 14:40:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:33.608 14:40:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:33.608 14:40:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:33.608 14:40:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:33.608 14:40:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:33.608 14:40:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:33.608 14:40:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:33.608 14:40:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:33.608 14:40:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:33.608 14:40:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:33.608 14:40:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:14:33.608 14:40:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:14:33.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.608 14:40:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74682 00:14:33.608 14:40:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74682 00:14:33.608 14:40:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 74682 ']' 00:14:33.608 14:40:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:33.608 14:40:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.608 14:40:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:33.608 14:40:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.608 14:40:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:33.608 14:40:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.608 [2024-11-04 14:40:32.634640] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:14:33.608 [2024-11-04 14:40:32.635033] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74682 ] 00:14:33.867 [2024-11-04 14:40:32.822086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.867 [2024-11-04 14:40:32.979888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.126 [2024-11-04 14:40:33.207031] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.126 [2024-11-04 14:40:33.207127] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:34.694 
14:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.694 malloc1 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.694 [2024-11-04 14:40:33.743344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:34.694 [2024-11-04 14:40:33.743601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.694 [2024-11-04 14:40:33.743719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:34.694 [2024-11-04 14:40:33.743959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.694 [2024-11-04 14:40:33.746919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.694 [2024-11-04 14:40:33.747131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:34.694 pt1 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.694 malloc2 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.694 [2024-11-04 14:40:33.800447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:34.694 [2024-11-04 14:40:33.800678] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.694 [2024-11-04 14:40:33.800735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:34.694 [2024-11-04 14:40:33.800755] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.694 [2024-11-04 14:40:33.803764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.694 [2024-11-04 14:40:33.803965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:34.694 
pt2 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.694 14:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.953 malloc3 00:14:34.953 14:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.953 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:34.953 14:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.953 14:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.953 [2024-11-04 14:40:33.868737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:34.953 [2024-11-04 14:40:33.868807] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.953 [2024-11-04 14:40:33.868840] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:34.953 [2024-11-04 14:40:33.868856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.953 [2024-11-04 14:40:33.871725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.953 [2024-11-04 14:40:33.871906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:34.953 pt3 00:14:34.953 14:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.953 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:34.953 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:34.953 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:34.953 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:34.953 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:34.953 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:34.953 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:34.953 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:34.953 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:34.953 14:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.953 14:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.953 malloc4 00:14:34.953 14:40:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.953 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:34.953 14:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.953 14:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.953 [2024-11-04 14:40:33.927215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:34.953 [2024-11-04 14:40:33.927429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.954 [2024-11-04 14:40:33.927508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:34.954 [2024-11-04 14:40:33.927619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.954 [2024-11-04 14:40:33.930574] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.954 [2024-11-04 14:40:33.930757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:34.954 pt4 00:14:34.954 14:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.954 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:34.954 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:34.954 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:34.954 14:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.954 14:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.954 [2024-11-04 14:40:33.939491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:34.954 [2024-11-04 14:40:33.942034] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:34.954 [2024-11-04 14:40:33.942151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:34.954 [2024-11-04 14:40:33.942225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:34.954 [2024-11-04 14:40:33.942499] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:34.954 [2024-11-04 14:40:33.942523] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:34.954 [2024-11-04 14:40:33.942927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:34.954 [2024-11-04 14:40:33.943223] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:34.954 [2024-11-04 14:40:33.943248] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:34.954 [2024-11-04 14:40:33.943506] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.954 14:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.954 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:34.954 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.954 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.954 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:34.954 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:34.954 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:34.954 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.954 
14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.954 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.954 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.954 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.954 14:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.954 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.954 14:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.954 14:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.954 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.954 "name": "raid_bdev1", 00:14:34.954 "uuid": "5f27591a-22d2-4f9c-aabf-ae3993b2bf5f", 00:14:34.954 "strip_size_kb": 0, 00:14:34.954 "state": "online", 00:14:34.954 "raid_level": "raid1", 00:14:34.954 "superblock": true, 00:14:34.954 "num_base_bdevs": 4, 00:14:34.954 "num_base_bdevs_discovered": 4, 00:14:34.954 "num_base_bdevs_operational": 4, 00:14:34.954 "base_bdevs_list": [ 00:14:34.954 { 00:14:34.954 "name": "pt1", 00:14:34.954 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:34.954 "is_configured": true, 00:14:34.954 "data_offset": 2048, 00:14:34.954 "data_size": 63488 00:14:34.954 }, 00:14:34.954 { 00:14:34.954 "name": "pt2", 00:14:34.954 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:34.954 "is_configured": true, 00:14:34.954 "data_offset": 2048, 00:14:34.954 "data_size": 63488 00:14:34.954 }, 00:14:34.954 { 00:14:34.954 "name": "pt3", 00:14:34.954 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:34.954 "is_configured": true, 00:14:34.954 "data_offset": 2048, 00:14:34.954 "data_size": 63488 
00:14:34.954 }, 00:14:34.954 { 00:14:34.954 "name": "pt4", 00:14:34.954 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:34.954 "is_configured": true, 00:14:34.954 "data_offset": 2048, 00:14:34.954 "data_size": 63488 00:14:34.954 } 00:14:34.954 ] 00:14:34.954 }' 00:14:34.954 14:40:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.954 14:40:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.522 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:35.522 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:35.522 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:35.522 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:35.522 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:35.522 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:35.522 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:35.522 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.522 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.522 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:35.522 [2024-11-04 14:40:34.472055] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:35.522 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.522 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:35.522 "name": "raid_bdev1", 00:14:35.522 "aliases": [ 00:14:35.522 "5f27591a-22d2-4f9c-aabf-ae3993b2bf5f" 00:14:35.522 ], 
00:14:35.522 "product_name": "Raid Volume", 00:14:35.522 "block_size": 512, 00:14:35.522 "num_blocks": 63488, 00:14:35.522 "uuid": "5f27591a-22d2-4f9c-aabf-ae3993b2bf5f", 00:14:35.522 "assigned_rate_limits": { 00:14:35.522 "rw_ios_per_sec": 0, 00:14:35.522 "rw_mbytes_per_sec": 0, 00:14:35.522 "r_mbytes_per_sec": 0, 00:14:35.522 "w_mbytes_per_sec": 0 00:14:35.522 }, 00:14:35.522 "claimed": false, 00:14:35.522 "zoned": false, 00:14:35.522 "supported_io_types": { 00:14:35.522 "read": true, 00:14:35.522 "write": true, 00:14:35.522 "unmap": false, 00:14:35.522 "flush": false, 00:14:35.522 "reset": true, 00:14:35.522 "nvme_admin": false, 00:14:35.522 "nvme_io": false, 00:14:35.522 "nvme_io_md": false, 00:14:35.522 "write_zeroes": true, 00:14:35.522 "zcopy": false, 00:14:35.522 "get_zone_info": false, 00:14:35.522 "zone_management": false, 00:14:35.522 "zone_append": false, 00:14:35.522 "compare": false, 00:14:35.522 "compare_and_write": false, 00:14:35.522 "abort": false, 00:14:35.522 "seek_hole": false, 00:14:35.522 "seek_data": false, 00:14:35.522 "copy": false, 00:14:35.522 "nvme_iov_md": false 00:14:35.522 }, 00:14:35.522 "memory_domains": [ 00:14:35.522 { 00:14:35.522 "dma_device_id": "system", 00:14:35.522 "dma_device_type": 1 00:14:35.522 }, 00:14:35.522 { 00:14:35.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.522 "dma_device_type": 2 00:14:35.522 }, 00:14:35.522 { 00:14:35.522 "dma_device_id": "system", 00:14:35.522 "dma_device_type": 1 00:14:35.522 }, 00:14:35.522 { 00:14:35.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.522 "dma_device_type": 2 00:14:35.522 }, 00:14:35.522 { 00:14:35.522 "dma_device_id": "system", 00:14:35.522 "dma_device_type": 1 00:14:35.522 }, 00:14:35.522 { 00:14:35.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.522 "dma_device_type": 2 00:14:35.522 }, 00:14:35.522 { 00:14:35.522 "dma_device_id": "system", 00:14:35.522 "dma_device_type": 1 00:14:35.522 }, 00:14:35.522 { 00:14:35.522 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:35.522 "dma_device_type": 2 00:14:35.522 } 00:14:35.522 ], 00:14:35.522 "driver_specific": { 00:14:35.522 "raid": { 00:14:35.522 "uuid": "5f27591a-22d2-4f9c-aabf-ae3993b2bf5f", 00:14:35.522 "strip_size_kb": 0, 00:14:35.522 "state": "online", 00:14:35.522 "raid_level": "raid1", 00:14:35.522 "superblock": true, 00:14:35.522 "num_base_bdevs": 4, 00:14:35.522 "num_base_bdevs_discovered": 4, 00:14:35.522 "num_base_bdevs_operational": 4, 00:14:35.522 "base_bdevs_list": [ 00:14:35.522 { 00:14:35.522 "name": "pt1", 00:14:35.522 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:35.522 "is_configured": true, 00:14:35.522 "data_offset": 2048, 00:14:35.522 "data_size": 63488 00:14:35.522 }, 00:14:35.522 { 00:14:35.522 "name": "pt2", 00:14:35.522 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:35.522 "is_configured": true, 00:14:35.522 "data_offset": 2048, 00:14:35.522 "data_size": 63488 00:14:35.522 }, 00:14:35.522 { 00:14:35.522 "name": "pt3", 00:14:35.522 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:35.522 "is_configured": true, 00:14:35.522 "data_offset": 2048, 00:14:35.522 "data_size": 63488 00:14:35.522 }, 00:14:35.522 { 00:14:35.522 "name": "pt4", 00:14:35.522 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:35.522 "is_configured": true, 00:14:35.522 "data_offset": 2048, 00:14:35.522 "data_size": 63488 00:14:35.522 } 00:14:35.522 ] 00:14:35.522 } 00:14:35.522 } 00:14:35.522 }' 00:14:35.522 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:35.522 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:35.522 pt2 00:14:35.522 pt3 00:14:35.522 pt4' 00:14:35.522 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:35.522 14:40:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:35.522 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:35.522 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:35.522 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:35.522 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.522 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:35.782 14:40:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | 
.uuid' 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.782 [2024-11-04 14:40:34.832086] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5f27591a-22d2-4f9c-aabf-ae3993b2bf5f 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5f27591a-22d2-4f9c-aabf-ae3993b2bf5f ']' 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.782 [2024-11-04 14:40:34.887702] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:35.782 [2024-11-04 14:40:34.887737] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:35.782 [2024-11-04 14:40:34.887839] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:35.782 [2024-11-04 14:40:34.887950] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:35.782 [2024-11-04 14:40:34.888003] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:35.782 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:36.042 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.042 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:36.042 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:36.042 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:36.042 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:36.042 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.042 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.042 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.042 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:36.042 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:36.042 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.042 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.042 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.042 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:36.042 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:36.042 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.042 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.042 14:40:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.042 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:36.043 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:14:36.043 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.043 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.043 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.043 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:36.043 14:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:36.043 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.043 14:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:36.043 14:40:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.043 [2024-11-04 14:40:35.043744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:36.043 [2024-11-04 14:40:35.046393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:36.043 [2024-11-04 14:40:35.046640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:36.043 [2024-11-04 14:40:35.046716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:36.043 [2024-11-04 14:40:35.046796] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:36.043 [2024-11-04 14:40:35.046875] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:36.043 [2024-11-04 14:40:35.046925] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:36.043 [2024-11-04 14:40:35.046991] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:36.043 [2024-11-04 14:40:35.047017] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:36.043 [2024-11-04 14:40:35.047034] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:14:36.043 request: 00:14:36.043 { 00:14:36.043 "name": "raid_bdev1", 00:14:36.043 "raid_level": "raid1", 00:14:36.043 "base_bdevs": [ 00:14:36.043 "malloc1", 00:14:36.043 "malloc2", 00:14:36.043 "malloc3", 00:14:36.043 "malloc4" 00:14:36.043 ], 00:14:36.043 "superblock": false, 00:14:36.043 "method": "bdev_raid_create", 00:14:36.043 "req_id": 1 00:14:36.043 } 00:14:36.043 Got JSON-RPC error response 00:14:36.043 response: 00:14:36.043 { 00:14:36.043 "code": -17, 00:14:36.043 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:36.043 } 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:36.043 
14:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.043 [2024-11-04 14:40:35.111774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:36.043 [2024-11-04 14:40:35.111986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.043 [2024-11-04 14:40:35.112153] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:36.043 [2024-11-04 14:40:35.112342] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.043 [2024-11-04 14:40:35.115480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.043 [2024-11-04 14:40:35.115656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:36.043 [2024-11-04 14:40:35.115862] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:36.043 [2024-11-04 14:40:35.115983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:36.043 pt1 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:36.043 14:40:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.043 14:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.302 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.302 "name": "raid_bdev1", 00:14:36.302 "uuid": "5f27591a-22d2-4f9c-aabf-ae3993b2bf5f", 00:14:36.302 "strip_size_kb": 0, 00:14:36.302 "state": "configuring", 00:14:36.302 "raid_level": "raid1", 00:14:36.302 "superblock": true, 00:14:36.302 "num_base_bdevs": 4, 00:14:36.302 "num_base_bdevs_discovered": 1, 00:14:36.302 "num_base_bdevs_operational": 4, 00:14:36.302 "base_bdevs_list": [ 00:14:36.302 { 00:14:36.302 "name": "pt1", 00:14:36.302 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:36.302 "is_configured": true, 00:14:36.302 "data_offset": 2048, 00:14:36.302 "data_size": 63488 00:14:36.302 }, 00:14:36.302 { 00:14:36.302 "name": null, 00:14:36.302 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:36.302 "is_configured": false, 00:14:36.302 "data_offset": 2048, 00:14:36.302 "data_size": 63488 00:14:36.302 }, 00:14:36.302 { 00:14:36.302 "name": null, 00:14:36.302 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:36.302 
"is_configured": false, 00:14:36.302 "data_offset": 2048, 00:14:36.302 "data_size": 63488 00:14:36.302 }, 00:14:36.302 { 00:14:36.302 "name": null, 00:14:36.302 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:36.302 "is_configured": false, 00:14:36.302 "data_offset": 2048, 00:14:36.302 "data_size": 63488 00:14:36.302 } 00:14:36.302 ] 00:14:36.302 }' 00:14:36.303 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.303 14:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.606 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:36.606 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:36.606 14:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.606 14:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.606 [2024-11-04 14:40:35.636188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:36.606 [2024-11-04 14:40:35.636271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.606 [2024-11-04 14:40:35.636302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:36.606 [2024-11-04 14:40:35.636321] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.606 [2024-11-04 14:40:35.636912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.606 [2024-11-04 14:40:35.636967] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:36.606 [2024-11-04 14:40:35.637071] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:36.606 [2024-11-04 14:40:35.637121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:14:36.606 pt2 00:14:36.606 14:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.606 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:36.606 14:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.606 14:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.606 [2024-11-04 14:40:35.644160] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:36.606 14:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.606 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:14:36.606 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.606 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.606 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:36.606 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:36.606 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:36.606 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.606 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.606 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.606 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.606 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.606 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:36.606 14:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.606 14:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.606 14:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.606 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.606 "name": "raid_bdev1", 00:14:36.606 "uuid": "5f27591a-22d2-4f9c-aabf-ae3993b2bf5f", 00:14:36.606 "strip_size_kb": 0, 00:14:36.606 "state": "configuring", 00:14:36.606 "raid_level": "raid1", 00:14:36.606 "superblock": true, 00:14:36.606 "num_base_bdevs": 4, 00:14:36.606 "num_base_bdevs_discovered": 1, 00:14:36.606 "num_base_bdevs_operational": 4, 00:14:36.606 "base_bdevs_list": [ 00:14:36.606 { 00:14:36.606 "name": "pt1", 00:14:36.606 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:36.606 "is_configured": true, 00:14:36.606 "data_offset": 2048, 00:14:36.606 "data_size": 63488 00:14:36.606 }, 00:14:36.606 { 00:14:36.606 "name": null, 00:14:36.606 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:36.606 "is_configured": false, 00:14:36.606 "data_offset": 0, 00:14:36.606 "data_size": 63488 00:14:36.606 }, 00:14:36.606 { 00:14:36.606 "name": null, 00:14:36.606 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:36.606 "is_configured": false, 00:14:36.606 "data_offset": 2048, 00:14:36.606 "data_size": 63488 00:14:36.606 }, 00:14:36.606 { 00:14:36.606 "name": null, 00:14:36.606 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:36.606 "is_configured": false, 00:14:36.606 "data_offset": 2048, 00:14:36.606 "data_size": 63488 00:14:36.606 } 00:14:36.606 ] 00:14:36.606 }' 00:14:36.606 14:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.606 14:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.173 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:14:37.173 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:37.173 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:37.173 14:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.173 14:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.173 [2024-11-04 14:40:36.168371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:37.173 [2024-11-04 14:40:36.168472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.173 [2024-11-04 14:40:36.168512] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:37.173 [2024-11-04 14:40:36.168530] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.173 [2024-11-04 14:40:36.169130] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.173 [2024-11-04 14:40:36.169165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:37.173 [2024-11-04 14:40:36.169273] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:37.173 [2024-11-04 14:40:36.169318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:37.173 pt2 00:14:37.173 14:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.173 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:37.173 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:37.173 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:37.173 14:40:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.173 14:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.173 [2024-11-04 14:40:36.176303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:37.173 [2024-11-04 14:40:36.176370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.173 [2024-11-04 14:40:36.176397] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:37.174 [2024-11-04 14:40:36.176411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.174 [2024-11-04 14:40:36.176864] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.174 [2024-11-04 14:40:36.176897] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:37.174 [2024-11-04 14:40:36.176995] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:37.174 [2024-11-04 14:40:36.177031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:37.174 pt3 00:14:37.174 14:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.174 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:37.174 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:37.174 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:37.174 14:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.174 14:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.174 [2024-11-04 14:40:36.184269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:37.174 [2024-11-04 
14:40:36.184390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.174 [2024-11-04 14:40:36.184416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:37.174 [2024-11-04 14:40:36.184429] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.174 [2024-11-04 14:40:36.184925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.174 [2024-11-04 14:40:36.184974] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:37.174 [2024-11-04 14:40:36.185060] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:37.174 [2024-11-04 14:40:36.185088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:37.174 [2024-11-04 14:40:36.185286] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:37.174 [2024-11-04 14:40:36.185309] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:37.174 [2024-11-04 14:40:36.185633] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:37.174 [2024-11-04 14:40:36.185838] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:37.174 [2024-11-04 14:40:36.185867] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:37.174 [2024-11-04 14:40:36.186069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.174 pt4 00:14:37.174 14:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.174 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:37.174 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:37.174 14:40:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:37.174 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.174 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.174 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.174 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.174 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:37.174 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.174 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.174 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.174 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.174 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.174 14:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.174 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.174 14:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.174 14:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.174 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.174 "name": "raid_bdev1", 00:14:37.174 "uuid": "5f27591a-22d2-4f9c-aabf-ae3993b2bf5f", 00:14:37.174 "strip_size_kb": 0, 00:14:37.174 "state": "online", 00:14:37.174 "raid_level": "raid1", 00:14:37.174 "superblock": true, 00:14:37.174 "num_base_bdevs": 4, 00:14:37.174 
"num_base_bdevs_discovered": 4, 00:14:37.174 "num_base_bdevs_operational": 4, 00:14:37.174 "base_bdevs_list": [ 00:14:37.174 { 00:14:37.174 "name": "pt1", 00:14:37.174 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:37.174 "is_configured": true, 00:14:37.174 "data_offset": 2048, 00:14:37.174 "data_size": 63488 00:14:37.174 }, 00:14:37.174 { 00:14:37.174 "name": "pt2", 00:14:37.174 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:37.174 "is_configured": true, 00:14:37.174 "data_offset": 2048, 00:14:37.174 "data_size": 63488 00:14:37.174 }, 00:14:37.174 { 00:14:37.174 "name": "pt3", 00:14:37.174 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:37.174 "is_configured": true, 00:14:37.174 "data_offset": 2048, 00:14:37.174 "data_size": 63488 00:14:37.174 }, 00:14:37.174 { 00:14:37.174 "name": "pt4", 00:14:37.174 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:37.174 "is_configured": true, 00:14:37.174 "data_offset": 2048, 00:14:37.174 "data_size": 63488 00:14:37.174 } 00:14:37.174 ] 00:14:37.174 }' 00:14:37.174 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.174 14:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.741 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:37.741 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:37.741 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:37.741 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:37.741 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:37.741 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:37.741 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:14:37.741 14:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.741 14:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.741 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:37.741 [2024-11-04 14:40:36.716924] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:37.741 14:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.741 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:37.741 "name": "raid_bdev1", 00:14:37.741 "aliases": [ 00:14:37.741 "5f27591a-22d2-4f9c-aabf-ae3993b2bf5f" 00:14:37.742 ], 00:14:37.742 "product_name": "Raid Volume", 00:14:37.742 "block_size": 512, 00:14:37.742 "num_blocks": 63488, 00:14:37.742 "uuid": "5f27591a-22d2-4f9c-aabf-ae3993b2bf5f", 00:14:37.742 "assigned_rate_limits": { 00:14:37.742 "rw_ios_per_sec": 0, 00:14:37.742 "rw_mbytes_per_sec": 0, 00:14:37.742 "r_mbytes_per_sec": 0, 00:14:37.742 "w_mbytes_per_sec": 0 00:14:37.742 }, 00:14:37.742 "claimed": false, 00:14:37.742 "zoned": false, 00:14:37.742 "supported_io_types": { 00:14:37.742 "read": true, 00:14:37.742 "write": true, 00:14:37.742 "unmap": false, 00:14:37.742 "flush": false, 00:14:37.742 "reset": true, 00:14:37.742 "nvme_admin": false, 00:14:37.742 "nvme_io": false, 00:14:37.742 "nvme_io_md": false, 00:14:37.742 "write_zeroes": true, 00:14:37.742 "zcopy": false, 00:14:37.742 "get_zone_info": false, 00:14:37.742 "zone_management": false, 00:14:37.742 "zone_append": false, 00:14:37.742 "compare": false, 00:14:37.742 "compare_and_write": false, 00:14:37.742 "abort": false, 00:14:37.742 "seek_hole": false, 00:14:37.742 "seek_data": false, 00:14:37.742 "copy": false, 00:14:37.742 "nvme_iov_md": false 00:14:37.742 }, 00:14:37.742 "memory_domains": [ 00:14:37.742 { 00:14:37.742 "dma_device_id": "system", 00:14:37.742 
"dma_device_type": 1 00:14:37.742 }, 00:14:37.742 { 00:14:37.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.742 "dma_device_type": 2 00:14:37.742 }, 00:14:37.742 { 00:14:37.742 "dma_device_id": "system", 00:14:37.742 "dma_device_type": 1 00:14:37.742 }, 00:14:37.742 { 00:14:37.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.742 "dma_device_type": 2 00:14:37.742 }, 00:14:37.742 { 00:14:37.742 "dma_device_id": "system", 00:14:37.742 "dma_device_type": 1 00:14:37.742 }, 00:14:37.742 { 00:14:37.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.742 "dma_device_type": 2 00:14:37.742 }, 00:14:37.742 { 00:14:37.742 "dma_device_id": "system", 00:14:37.742 "dma_device_type": 1 00:14:37.742 }, 00:14:37.742 { 00:14:37.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.742 "dma_device_type": 2 00:14:37.742 } 00:14:37.742 ], 00:14:37.742 "driver_specific": { 00:14:37.742 "raid": { 00:14:37.742 "uuid": "5f27591a-22d2-4f9c-aabf-ae3993b2bf5f", 00:14:37.742 "strip_size_kb": 0, 00:14:37.742 "state": "online", 00:14:37.742 "raid_level": "raid1", 00:14:37.742 "superblock": true, 00:14:37.742 "num_base_bdevs": 4, 00:14:37.742 "num_base_bdevs_discovered": 4, 00:14:37.742 "num_base_bdevs_operational": 4, 00:14:37.742 "base_bdevs_list": [ 00:14:37.742 { 00:14:37.742 "name": "pt1", 00:14:37.742 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:37.742 "is_configured": true, 00:14:37.742 "data_offset": 2048, 00:14:37.742 "data_size": 63488 00:14:37.742 }, 00:14:37.742 { 00:14:37.742 "name": "pt2", 00:14:37.742 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:37.742 "is_configured": true, 00:14:37.742 "data_offset": 2048, 00:14:37.742 "data_size": 63488 00:14:37.742 }, 00:14:37.742 { 00:14:37.742 "name": "pt3", 00:14:37.742 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:37.742 "is_configured": true, 00:14:37.742 "data_offset": 2048, 00:14:37.742 "data_size": 63488 00:14:37.742 }, 00:14:37.742 { 00:14:37.742 "name": "pt4", 00:14:37.742 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:14:37.742 "is_configured": true, 00:14:37.742 "data_offset": 2048, 00:14:37.742 "data_size": 63488 00:14:37.742 } 00:14:37.742 ] 00:14:37.742 } 00:14:37.742 } 00:14:37.742 }' 00:14:37.742 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:37.742 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:37.742 pt2 00:14:37.742 pt3 00:14:37.742 pt4' 00:14:37.742 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:38.002 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:38.002 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:38.002 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:38.002 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:38.002 14:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.002 14:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.002 14:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.002 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:38.002 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:38.002 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:38.002 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:38.002 14:40:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:38.002 14:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.002 14:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.002 14:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.002 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:38.002 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:38.002 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:38.002 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:38.002 14:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.002 14:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.002 14:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:38.002 14:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.002 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:38.002 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:38.002 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:38.002 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:38.002 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:38.002 14:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:14:38.002 14:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.002 14:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.002 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:38.002 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:38.002 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:38.002 14:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.002 14:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.002 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:38.002 [2024-11-04 14:40:37.088942] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:38.002 14:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.261 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5f27591a-22d2-4f9c-aabf-ae3993b2bf5f '!=' 5f27591a-22d2-4f9c-aabf-ae3993b2bf5f ']' 00:14:38.261 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:14:38.261 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:38.261 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:38.261 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:38.261 14:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.261 14:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.261 [2024-11-04 14:40:37.136688] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:38.261 14:40:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.261 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:38.261 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.261 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.262 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:38.262 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:38.262 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.262 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.262 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.262 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.262 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.262 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.262 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.262 14:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.262 14:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.262 14:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.262 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.262 "name": "raid_bdev1", 00:14:38.262 "uuid": "5f27591a-22d2-4f9c-aabf-ae3993b2bf5f", 00:14:38.262 "strip_size_kb": 0, 00:14:38.262 "state": "online", 
00:14:38.262 "raid_level": "raid1", 00:14:38.262 "superblock": true, 00:14:38.262 "num_base_bdevs": 4, 00:14:38.262 "num_base_bdevs_discovered": 3, 00:14:38.262 "num_base_bdevs_operational": 3, 00:14:38.262 "base_bdevs_list": [ 00:14:38.262 { 00:14:38.262 "name": null, 00:14:38.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.262 "is_configured": false, 00:14:38.262 "data_offset": 0, 00:14:38.262 "data_size": 63488 00:14:38.262 }, 00:14:38.262 { 00:14:38.262 "name": "pt2", 00:14:38.262 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:38.262 "is_configured": true, 00:14:38.262 "data_offset": 2048, 00:14:38.262 "data_size": 63488 00:14:38.262 }, 00:14:38.262 { 00:14:38.262 "name": "pt3", 00:14:38.262 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:38.262 "is_configured": true, 00:14:38.262 "data_offset": 2048, 00:14:38.262 "data_size": 63488 00:14:38.262 }, 00:14:38.262 { 00:14:38.262 "name": "pt4", 00:14:38.262 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:38.262 "is_configured": true, 00:14:38.262 "data_offset": 2048, 00:14:38.262 "data_size": 63488 00:14:38.262 } 00:14:38.262 ] 00:14:38.262 }' 00:14:38.262 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.262 14:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.830 [2024-11-04 14:40:37.680762] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:38.830 [2024-11-04 14:40:37.680801] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:38.830 [2024-11-04 14:40:37.680892] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:14:38.830 [2024-11-04 14:40:37.681006] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:38.830 [2024-11-04 14:40:37.681043] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:38.830 
14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.830 [2024-11-04 14:40:37.780770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:38.830 [2024-11-04 14:40:37.781059] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.830 [2024-11-04 14:40:37.781105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:38.830 [2024-11-04 14:40:37.781121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.830 [2024-11-04 14:40:37.784113] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.830 [2024-11-04 14:40:37.784311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:38.830 [2024-11-04 14:40:37.784451] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:38.830 [2024-11-04 14:40:37.784512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:38.830 pt2 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.830 "name": "raid_bdev1", 00:14:38.830 "uuid": "5f27591a-22d2-4f9c-aabf-ae3993b2bf5f", 00:14:38.830 "strip_size_kb": 0, 00:14:38.830 "state": "configuring", 00:14:38.830 "raid_level": "raid1", 00:14:38.830 "superblock": true, 00:14:38.830 "num_base_bdevs": 4, 00:14:38.830 "num_base_bdevs_discovered": 1, 00:14:38.830 "num_base_bdevs_operational": 3, 00:14:38.830 "base_bdevs_list": [ 00:14:38.830 { 00:14:38.830 "name": null, 00:14:38.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.830 "is_configured": false, 00:14:38.830 "data_offset": 2048, 00:14:38.830 "data_size": 63488 00:14:38.830 }, 00:14:38.830 { 00:14:38.830 "name": "pt2", 00:14:38.830 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:38.830 "is_configured": true, 00:14:38.830 "data_offset": 2048, 00:14:38.830 "data_size": 63488 00:14:38.830 }, 00:14:38.830 { 00:14:38.830 "name": null, 00:14:38.830 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:38.830 "is_configured": false, 00:14:38.830 "data_offset": 2048, 00:14:38.830 "data_size": 63488 00:14:38.830 }, 00:14:38.830 { 00:14:38.830 "name": null, 00:14:38.830 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:38.830 "is_configured": false, 00:14:38.830 "data_offset": 2048, 00:14:38.830 "data_size": 63488 00:14:38.830 } 00:14:38.830 ] 00:14:38.830 }' 
00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.830 14:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.442 14:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:39.442 14:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:39.442 14:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:39.442 14:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.442 14:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.442 [2024-11-04 14:40:38.316988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:39.442 [2024-11-04 14:40:38.317226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.442 [2024-11-04 14:40:38.317274] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:39.442 [2024-11-04 14:40:38.317290] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.442 [2024-11-04 14:40:38.317882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.442 [2024-11-04 14:40:38.317908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:39.442 [2024-11-04 14:40:38.318064] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:39.442 [2024-11-04 14:40:38.318098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:39.442 pt3 00:14:39.442 14:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.442 14:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:14:39.442 14:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.442 14:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:39.442 14:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:39.442 14:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.442 14:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:39.442 14:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.442 14:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.442 14:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.442 14:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.442 14:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.442 14:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.442 14:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.442 14:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.442 14:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.442 14:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.442 "name": "raid_bdev1", 00:14:39.442 "uuid": "5f27591a-22d2-4f9c-aabf-ae3993b2bf5f", 00:14:39.442 "strip_size_kb": 0, 00:14:39.442 "state": "configuring", 00:14:39.442 "raid_level": "raid1", 00:14:39.442 "superblock": true, 00:14:39.442 "num_base_bdevs": 4, 00:14:39.442 "num_base_bdevs_discovered": 2, 00:14:39.442 "num_base_bdevs_operational": 3, 00:14:39.442 
"base_bdevs_list": [ 00:14:39.442 { 00:14:39.442 "name": null, 00:14:39.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.442 "is_configured": false, 00:14:39.442 "data_offset": 2048, 00:14:39.442 "data_size": 63488 00:14:39.442 }, 00:14:39.442 { 00:14:39.442 "name": "pt2", 00:14:39.442 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:39.442 "is_configured": true, 00:14:39.442 "data_offset": 2048, 00:14:39.442 "data_size": 63488 00:14:39.442 }, 00:14:39.442 { 00:14:39.442 "name": "pt3", 00:14:39.442 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:39.442 "is_configured": true, 00:14:39.442 "data_offset": 2048, 00:14:39.442 "data_size": 63488 00:14:39.442 }, 00:14:39.442 { 00:14:39.442 "name": null, 00:14:39.442 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:39.442 "is_configured": false, 00:14:39.442 "data_offset": 2048, 00:14:39.442 "data_size": 63488 00:14:39.442 } 00:14:39.442 ] 00:14:39.442 }' 00:14:39.442 14:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.442 14:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.701 14:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:39.701 14:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:39.701 14:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:14:39.701 14:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:39.701 14:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.701 14:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.701 [2024-11-04 14:40:38.813163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:39.701 [2024-11-04 14:40:38.813241] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.701 [2024-11-04 14:40:38.813275] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:39.701 [2024-11-04 14:40:38.813291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.701 [2024-11-04 14:40:38.813898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.701 [2024-11-04 14:40:38.813924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:39.701 [2024-11-04 14:40:38.814060] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:39.701 [2024-11-04 14:40:38.814103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:39.701 [2024-11-04 14:40:38.814276] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:39.701 [2024-11-04 14:40:38.814293] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:39.701 [2024-11-04 14:40:38.814603] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:39.701 [2024-11-04 14:40:38.814800] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:39.701 [2024-11-04 14:40:38.814822] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:39.701 [2024-11-04 14:40:38.815012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.701 pt4 00:14:39.701 14:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.701 14:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:39.701 14:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.701 14:40:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.701 14:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:39.701 14:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.701 14:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:39.701 14:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.701 14:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.701 14:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.701 14:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.959 14:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.959 14:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.959 14:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.959 14:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.959 14:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.959 14:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.959 "name": "raid_bdev1", 00:14:39.959 "uuid": "5f27591a-22d2-4f9c-aabf-ae3993b2bf5f", 00:14:39.959 "strip_size_kb": 0, 00:14:39.959 "state": "online", 00:14:39.959 "raid_level": "raid1", 00:14:39.959 "superblock": true, 00:14:39.959 "num_base_bdevs": 4, 00:14:39.959 "num_base_bdevs_discovered": 3, 00:14:39.959 "num_base_bdevs_operational": 3, 00:14:39.959 "base_bdevs_list": [ 00:14:39.959 { 00:14:39.959 "name": null, 00:14:39.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.959 "is_configured": false, 00:14:39.959 
"data_offset": 2048, 00:14:39.959 "data_size": 63488 00:14:39.959 }, 00:14:39.959 { 00:14:39.959 "name": "pt2", 00:14:39.959 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:39.959 "is_configured": true, 00:14:39.959 "data_offset": 2048, 00:14:39.959 "data_size": 63488 00:14:39.959 }, 00:14:39.959 { 00:14:39.959 "name": "pt3", 00:14:39.959 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:39.959 "is_configured": true, 00:14:39.959 "data_offset": 2048, 00:14:39.959 "data_size": 63488 00:14:39.959 }, 00:14:39.959 { 00:14:39.959 "name": "pt4", 00:14:39.959 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:39.959 "is_configured": true, 00:14:39.959 "data_offset": 2048, 00:14:39.959 "data_size": 63488 00:14:39.959 } 00:14:39.959 ] 00:14:39.959 }' 00:14:39.959 14:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.959 14:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.218 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:40.218 14:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.218 14:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.218 [2024-11-04 14:40:39.321295] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:40.218 [2024-11-04 14:40:39.321329] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:40.218 [2024-11-04 14:40:39.321451] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:40.218 [2024-11-04 14:40:39.321554] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:40.218 [2024-11-04 14:40:39.321573] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:40.218 14:40:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.218 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:40.218 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.218 14:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.218 14:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.218 14:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.477 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:40.477 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:40.477 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:14:40.477 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:14:40.477 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:14:40.477 14:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.477 14:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.477 14:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.477 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:40.477 14:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.477 14:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.477 [2024-11-04 14:40:39.397283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:40.477 [2024-11-04 14:40:39.397364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:14:40.477 [2024-11-04 14:40:39.397393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:14:40.477 [2024-11-04 14:40:39.397410] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.477 [2024-11-04 14:40:39.400355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.477 [2024-11-04 14:40:39.400408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:40.477 [2024-11-04 14:40:39.400517] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:40.477 [2024-11-04 14:40:39.400582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:40.477 [2024-11-04 14:40:39.400747] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:40.477 [2024-11-04 14:40:39.400771] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:40.477 [2024-11-04 14:40:39.400792] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:14:40.477 [2024-11-04 14:40:39.400875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:40.477 [2024-11-04 14:40:39.401051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:40.477 pt1 00:14:40.477 14:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.477 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:14:40.477 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:14:40.477 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.477 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:14:40.477 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:40.477 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:40.477 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:40.477 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.477 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.477 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.477 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.477 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.477 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.477 14:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.477 14:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.477 14:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.477 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.477 "name": "raid_bdev1", 00:14:40.477 "uuid": "5f27591a-22d2-4f9c-aabf-ae3993b2bf5f", 00:14:40.477 "strip_size_kb": 0, 00:14:40.477 "state": "configuring", 00:14:40.477 "raid_level": "raid1", 00:14:40.477 "superblock": true, 00:14:40.477 "num_base_bdevs": 4, 00:14:40.477 "num_base_bdevs_discovered": 2, 00:14:40.477 "num_base_bdevs_operational": 3, 00:14:40.477 "base_bdevs_list": [ 00:14:40.477 { 00:14:40.477 "name": null, 00:14:40.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.477 "is_configured": false, 00:14:40.477 "data_offset": 2048, 00:14:40.477 
"data_size": 63488 00:14:40.477 }, 00:14:40.477 { 00:14:40.477 "name": "pt2", 00:14:40.477 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:40.477 "is_configured": true, 00:14:40.477 "data_offset": 2048, 00:14:40.477 "data_size": 63488 00:14:40.477 }, 00:14:40.477 { 00:14:40.477 "name": "pt3", 00:14:40.477 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:40.477 "is_configured": true, 00:14:40.477 "data_offset": 2048, 00:14:40.477 "data_size": 63488 00:14:40.477 }, 00:14:40.477 { 00:14:40.477 "name": null, 00:14:40.477 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:40.477 "is_configured": false, 00:14:40.477 "data_offset": 2048, 00:14:40.477 "data_size": 63488 00:14:40.477 } 00:14:40.477 ] 00:14:40.477 }' 00:14:40.477 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.477 14:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.050 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:41.050 14:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.050 14:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.050 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:41.050 14:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.050 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:41.050 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:41.050 14:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.050 14:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.050 [2024-11-04 
14:40:39.969473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:41.050 [2024-11-04 14:40:39.969551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.050 [2024-11-04 14:40:39.969585] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:41.050 [2024-11-04 14:40:39.969601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.050 [2024-11-04 14:40:39.970167] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.050 [2024-11-04 14:40:39.970210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:41.050 [2024-11-04 14:40:39.970317] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:41.050 [2024-11-04 14:40:39.970359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:41.050 [2024-11-04 14:40:39.970529] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:14:41.050 [2024-11-04 14:40:39.970545] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:41.050 [2024-11-04 14:40:39.970866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:41.050 [2024-11-04 14:40:39.971076] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:14:41.050 [2024-11-04 14:40:39.971099] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:14:41.050 [2024-11-04 14:40:39.971273] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.050 pt4 00:14:41.050 14:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.050 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:41.050 14:40:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.050 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.050 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:41.050 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:41.050 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.050 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.050 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.050 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.050 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.050 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.050 14:40:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.050 14:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.050 14:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.050 14:40:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.050 14:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.050 "name": "raid_bdev1", 00:14:41.050 "uuid": "5f27591a-22d2-4f9c-aabf-ae3993b2bf5f", 00:14:41.050 "strip_size_kb": 0, 00:14:41.050 "state": "online", 00:14:41.050 "raid_level": "raid1", 00:14:41.050 "superblock": true, 00:14:41.050 "num_base_bdevs": 4, 00:14:41.050 "num_base_bdevs_discovered": 3, 00:14:41.050 "num_base_bdevs_operational": 3, 00:14:41.050 "base_bdevs_list": [ 00:14:41.050 { 
00:14:41.050 "name": null, 00:14:41.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.050 "is_configured": false, 00:14:41.050 "data_offset": 2048, 00:14:41.050 "data_size": 63488 00:14:41.050 }, 00:14:41.050 { 00:14:41.050 "name": "pt2", 00:14:41.050 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:41.050 "is_configured": true, 00:14:41.050 "data_offset": 2048, 00:14:41.050 "data_size": 63488 00:14:41.050 }, 00:14:41.050 { 00:14:41.050 "name": "pt3", 00:14:41.050 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:41.050 "is_configured": true, 00:14:41.050 "data_offset": 2048, 00:14:41.050 "data_size": 63488 00:14:41.050 }, 00:14:41.050 { 00:14:41.050 "name": "pt4", 00:14:41.050 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:41.050 "is_configured": true, 00:14:41.050 "data_offset": 2048, 00:14:41.050 "data_size": 63488 00:14:41.050 } 00:14:41.050 ] 00:14:41.050 }' 00:14:41.050 14:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.050 14:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.618 14:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:41.618 14:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:41.618 14:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.618 14:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.618 14:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.618 14:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:41.618 14:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:41.618 14:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:41.618 
14:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.618 14:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.618 [2024-11-04 14:40:40.542026] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:41.618 14:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.618 14:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 5f27591a-22d2-4f9c-aabf-ae3993b2bf5f '!=' 5f27591a-22d2-4f9c-aabf-ae3993b2bf5f ']' 00:14:41.618 14:40:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74682 00:14:41.618 14:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 74682 ']' 00:14:41.618 14:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 74682 00:14:41.618 14:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:14:41.618 14:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:41.618 14:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74682 00:14:41.618 killing process with pid 74682 00:14:41.618 14:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:41.618 14:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:41.618 14:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74682' 00:14:41.618 14:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 74682 00:14:41.618 [2024-11-04 14:40:40.613273] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:41.618 14:40:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 74682 00:14:41.618 [2024-11-04 14:40:40.613388] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.618 [2024-11-04 14:40:40.613493] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:41.618 [2024-11-04 14:40:40.613512] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:14:41.877 [2024-11-04 14:40:40.968252] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:43.254 ************************************ 00:14:43.254 END TEST raid_superblock_test 00:14:43.254 ************************************ 00:14:43.254 14:40:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:43.254 00:14:43.254 real 0m9.491s 00:14:43.254 user 0m15.581s 00:14:43.254 sys 0m1.386s 00:14:43.254 14:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:43.254 14:40:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.254 14:40:42 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:14:43.254 14:40:42 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:43.254 14:40:42 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:43.254 14:40:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:43.255 ************************************ 00:14:43.255 START TEST raid_read_error_test 00:14:43.255 ************************************ 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 read 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:43.255 14:40:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.g6ceWAxb0s 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75180 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75180 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 75180 ']' 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:43.255 14:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.255 [2024-11-04 14:40:42.172039] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:14:43.255 [2024-11-04 14:40:42.172405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75180 ] 00:14:43.255 [2024-11-04 14:40:42.346115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.513 [2024-11-04 14:40:42.474043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.771 [2024-11-04 14:40:42.682468] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:43.771 [2024-11-04 14:40:42.682510] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.029 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:44.029 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:14:44.029 14:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:44.029 14:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:44.029 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.029 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.288 BaseBdev1_malloc 00:14:44.288 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.288 14:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:44.288 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.288 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.288 true 00:14:44.288 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:44.288 14:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:44.288 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.288 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.288 [2024-11-04 14:40:43.165945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:44.288 [2024-11-04 14:40:43.166036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.288 [2024-11-04 14:40:43.166066] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:44.288 [2024-11-04 14:40:43.166083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.288 [2024-11-04 14:40:43.168886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.288 [2024-11-04 14:40:43.168952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:44.288 BaseBdev1 00:14:44.288 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.288 14:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:44.288 14:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:44.288 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.288 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.288 BaseBdev2_malloc 00:14:44.288 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.288 14:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:44.288 14:40:43 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.289 true 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.289 [2024-11-04 14:40:43.222680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:44.289 [2024-11-04 14:40:43.222762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.289 [2024-11-04 14:40:43.222788] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:44.289 [2024-11-04 14:40:43.222805] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.289 [2024-11-04 14:40:43.225680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.289 [2024-11-04 14:40:43.225878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:44.289 BaseBdev2 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.289 BaseBdev3_malloc 00:14:44.289 14:40:43 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.289 true 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.289 [2024-11-04 14:40:43.297536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:44.289 [2024-11-04 14:40:43.297609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.289 [2024-11-04 14:40:43.297641] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:44.289 [2024-11-04 14:40:43.297658] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.289 [2024-11-04 14:40:43.300495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.289 [2024-11-04 14:40:43.300547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:44.289 BaseBdev3 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.289 BaseBdev4_malloc 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.289 true 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.289 [2024-11-04 14:40:43.358316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:44.289 [2024-11-04 14:40:43.358395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.289 [2024-11-04 14:40:43.358427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:44.289 [2024-11-04 14:40:43.358444] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.289 [2024-11-04 14:40:43.361231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.289 [2024-11-04 14:40:43.361315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:44.289 BaseBdev4 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.289 [2024-11-04 14:40:43.366409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:44.289 [2024-11-04 14:40:43.368844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:44.289 [2024-11-04 14:40:43.369120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:44.289 [2024-11-04 14:40:43.369246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:44.289 [2024-11-04 14:40:43.369596] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:44.289 [2024-11-04 14:40:43.369621] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:44.289 [2024-11-04 14:40:43.369948] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:44.289 [2024-11-04 14:40:43.370212] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:44.289 [2024-11-04 14:40:43.370229] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:44.289 [2024-11-04 14:40:43.370503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:44.289 14:40:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.289 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.548 14:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.548 "name": "raid_bdev1", 00:14:44.548 "uuid": "89982a2d-2494-47b0-9151-60b4240fa356", 00:14:44.548 "strip_size_kb": 0, 00:14:44.548 "state": "online", 00:14:44.548 "raid_level": "raid1", 00:14:44.548 "superblock": true, 00:14:44.548 "num_base_bdevs": 4, 00:14:44.548 "num_base_bdevs_discovered": 4, 00:14:44.548 "num_base_bdevs_operational": 4, 00:14:44.548 "base_bdevs_list": [ 00:14:44.548 { 
00:14:44.548 "name": "BaseBdev1", 00:14:44.548 "uuid": "a734ca5c-4205-5242-b924-9395022873de", 00:14:44.548 "is_configured": true, 00:14:44.548 "data_offset": 2048, 00:14:44.548 "data_size": 63488 00:14:44.548 }, 00:14:44.548 { 00:14:44.548 "name": "BaseBdev2", 00:14:44.548 "uuid": "3f94b42e-b1e3-515d-b631-c4596fbd573f", 00:14:44.548 "is_configured": true, 00:14:44.548 "data_offset": 2048, 00:14:44.548 "data_size": 63488 00:14:44.548 }, 00:14:44.548 { 00:14:44.548 "name": "BaseBdev3", 00:14:44.548 "uuid": "de8c1d15-1ff8-58c9-9f0d-1cb1f48f0291", 00:14:44.548 "is_configured": true, 00:14:44.548 "data_offset": 2048, 00:14:44.548 "data_size": 63488 00:14:44.548 }, 00:14:44.548 { 00:14:44.548 "name": "BaseBdev4", 00:14:44.548 "uuid": "8485f149-b0fd-500a-a68e-5fe77d7a3a77", 00:14:44.548 "is_configured": true, 00:14:44.548 "data_offset": 2048, 00:14:44.548 "data_size": 63488 00:14:44.548 } 00:14:44.548 ] 00:14:44.548 }' 00:14:44.548 14:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.548 14:40:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.807 14:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:44.807 14:40:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:45.066 [2024-11-04 14:40:43.980266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:46.002 14:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:46.002 14:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.002 14:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.002 14:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.002 14:40:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:46.002 14:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:46.002 14:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:14:46.002 14:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:46.002 14:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:46.002 14:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.002 14:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.002 14:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.002 14:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:46.002 14:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:46.002 14:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.002 14:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.002 14:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.002 14:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.002 14:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.002 14:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.002 14:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.002 14:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.002 14:40:44 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.002 14:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.002 "name": "raid_bdev1", 00:14:46.002 "uuid": "89982a2d-2494-47b0-9151-60b4240fa356", 00:14:46.002 "strip_size_kb": 0, 00:14:46.002 "state": "online", 00:14:46.002 "raid_level": "raid1", 00:14:46.002 "superblock": true, 00:14:46.002 "num_base_bdevs": 4, 00:14:46.002 "num_base_bdevs_discovered": 4, 00:14:46.002 "num_base_bdevs_operational": 4, 00:14:46.002 "base_bdevs_list": [ 00:14:46.002 { 00:14:46.002 "name": "BaseBdev1", 00:14:46.002 "uuid": "a734ca5c-4205-5242-b924-9395022873de", 00:14:46.002 "is_configured": true, 00:14:46.002 "data_offset": 2048, 00:14:46.002 "data_size": 63488 00:14:46.002 }, 00:14:46.002 { 00:14:46.002 "name": "BaseBdev2", 00:14:46.002 "uuid": "3f94b42e-b1e3-515d-b631-c4596fbd573f", 00:14:46.002 "is_configured": true, 00:14:46.002 "data_offset": 2048, 00:14:46.002 "data_size": 63488 00:14:46.002 }, 00:14:46.002 { 00:14:46.002 "name": "BaseBdev3", 00:14:46.002 "uuid": "de8c1d15-1ff8-58c9-9f0d-1cb1f48f0291", 00:14:46.002 "is_configured": true, 00:14:46.002 "data_offset": 2048, 00:14:46.002 "data_size": 63488 00:14:46.002 }, 00:14:46.002 { 00:14:46.002 "name": "BaseBdev4", 00:14:46.002 "uuid": "8485f149-b0fd-500a-a68e-5fe77d7a3a77", 00:14:46.002 "is_configured": true, 00:14:46.002 "data_offset": 2048, 00:14:46.002 "data_size": 63488 00:14:46.002 } 00:14:46.002 ] 00:14:46.002 }' 00:14:46.002 14:40:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.002 14:40:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.568 14:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:46.568 14:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.568 14:40:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:46.568 [2024-11-04 14:40:45.431397] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:46.568 [2024-11-04 14:40:45.431645] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:46.568 { 00:14:46.568 "results": [ 00:14:46.568 { 00:14:46.568 "job": "raid_bdev1", 00:14:46.568 "core_mask": "0x1", 00:14:46.568 "workload": "randrw", 00:14:46.568 "percentage": 50, 00:14:46.568 "status": "finished", 00:14:46.568 "queue_depth": 1, 00:14:46.568 "io_size": 131072, 00:14:46.568 "runtime": 1.448781, 00:14:46.568 "iops": 7257.825716930302, 00:14:46.568 "mibps": 907.2282146162878, 00:14:46.568 "io_failed": 0, 00:14:46.568 "io_timeout": 0, 00:14:46.568 "avg_latency_us": 133.3027844205248, 00:14:46.568 "min_latency_us": 40.96, 00:14:46.568 "max_latency_us": 2010.7636363636364 00:14:46.568 } 00:14:46.568 ], 00:14:46.568 "core_count": 1 00:14:46.568 } 00:14:46.568 [2024-11-04 14:40:45.435265] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:46.568 [2024-11-04 14:40:45.435445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.568 [2024-11-04 14:40:45.435599] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:46.568 [2024-11-04 14:40:45.435638] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:46.568 14:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.568 14:40:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75180 00:14:46.568 14:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 75180 ']' 00:14:46.568 14:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 75180 00:14:46.568 14:40:45 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@957 -- # uname 00:14:46.568 14:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:46.568 14:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75180 00:14:46.568 killing process with pid 75180 00:14:46.568 14:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:46.568 14:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:46.568 14:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75180' 00:14:46.569 14:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 75180 00:14:46.569 [2024-11-04 14:40:45.481693] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:46.569 14:40:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 75180 00:14:46.827 [2024-11-04 14:40:45.792210] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:48.202 14:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.g6ceWAxb0s 00:14:48.202 14:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:48.202 14:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:48.202 14:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:14:48.202 14:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:14:48.202 14:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:48.202 14:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:48.202 14:40:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:48.202 00:14:48.202 real 0m4.867s 00:14:48.202 user 0m5.929s 00:14:48.202 sys 0m0.618s 
00:14:48.202 14:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:48.202 ************************************ 00:14:48.202 END TEST raid_read_error_test 00:14:48.202 ************************************ 00:14:48.202 14:40:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.202 14:40:46 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:14:48.202 14:40:46 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:48.202 14:40:46 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:48.202 14:40:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:48.202 ************************************ 00:14:48.202 START TEST raid_write_error_test 00:14:48.202 ************************************ 00:14:48.202 14:40:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 write 00:14:48.202 14:40:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:14:48.202 14:40:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:48.202 14:40:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:48.202 14:40:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:48.202 14:40:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:48.202 14:40:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:48.202 14:40:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:48.202 14:40:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:48.202 14:40:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:48.202 14:40:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:48.202 14:40:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:48.202 14:40:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:48.202 14:40:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:48.202 14:40:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:48.202 14:40:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:48.202 14:40:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:48.202 14:40:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:48.202 14:40:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:48.202 14:40:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:48.202 14:40:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:48.203 14:40:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:48.203 14:40:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:48.203 14:40:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:48.203 14:40:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:48.203 14:40:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:14:48.203 14:40:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:14:48.203 14:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:48.203 14:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.gXqXOhpx1e 00:14:48.203 Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.203 14:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75326 00:14:48.203 14:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:48.203 14:40:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75326 00:14:48.203 14:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 75326 ']' 00:14:48.203 14:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.203 14:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:48.203 14:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.203 14:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:48.203 14:40:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.203 [2024-11-04 14:40:47.115859] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:14:48.203 [2024-11-04 14:40:47.116085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75326 ] 00:14:48.203 [2024-11-04 14:40:47.298226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.461 [2024-11-04 14:40:47.433485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.719 [2024-11-04 14:40:47.646263] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:48.719 [2024-11-04 14:40:47.646556] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:49.286 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:49.286 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:14:49.286 14:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:49.286 14:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:49.286 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.286 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.286 BaseBdev1_malloc 00:14:49.286 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.286 14:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:49.286 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.286 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.286 true 00:14:49.286 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:14:49.286 14:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:49.286 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.286 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.286 [2024-11-04 14:40:48.196911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:49.286 [2024-11-04 14:40:48.197184] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.286 [2024-11-04 14:40:48.197230] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:49.286 [2024-11-04 14:40:48.197251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.286 [2024-11-04 14:40:48.200129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.286 [2024-11-04 14:40:48.200183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:49.286 BaseBdev1 00:14:49.286 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.286 14:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:49.286 14:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:49.286 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.286 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.286 BaseBdev2_malloc 00:14:49.286 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.286 14:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:49.286 14:40:48 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.286 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.286 true 00:14:49.286 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.286 14:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:49.286 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.286 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.286 [2024-11-04 14:40:48.262453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:49.286 [2024-11-04 14:40:48.262539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.286 [2024-11-04 14:40:48.262565] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:49.286 [2024-11-04 14:40:48.262582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.286 [2024-11-04 14:40:48.265423] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.286 [2024-11-04 14:40:48.265505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:49.287 BaseBdev2 00:14:49.287 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.287 14:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:49.287 14:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:49.287 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.287 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:14:49.287 BaseBdev3_malloc 00:14:49.287 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.287 14:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:49.287 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.287 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.287 true 00:14:49.287 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.287 14:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:49.287 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.287 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.287 [2024-11-04 14:40:48.343049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:49.287 [2024-11-04 14:40:48.343136] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.287 [2024-11-04 14:40:48.343180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:49.287 [2024-11-04 14:40:48.343198] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.287 [2024-11-04 14:40:48.346260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.287 [2024-11-04 14:40:48.346521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:49.287 BaseBdev3 00:14:49.287 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.287 14:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:49.287 14:40:48 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:49.287 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.287 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.287 BaseBdev4_malloc 00:14:49.287 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.287 14:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:49.287 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.287 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.287 true 00:14:49.287 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.287 14:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:49.545 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.546 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.546 [2024-11-04 14:40:48.414072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:49.546 [2024-11-04 14:40:48.414143] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.546 [2024-11-04 14:40:48.414171] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:49.546 [2024-11-04 14:40:48.414189] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.546 [2024-11-04 14:40:48.417069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.546 [2024-11-04 14:40:48.417123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:49.546 BaseBdev4 
00:14:49.546 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.546 14:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:49.546 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.546 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.546 [2024-11-04 14:40:48.426145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:49.546 [2024-11-04 14:40:48.428813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:49.546 [2024-11-04 14:40:48.428922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:49.546 [2024-11-04 14:40:48.429053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:49.546 [2024-11-04 14:40:48.429365] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:49.546 [2024-11-04 14:40:48.429389] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:49.546 [2024-11-04 14:40:48.429739] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:49.546 [2024-11-04 14:40:48.430131] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:49.546 [2024-11-04 14:40:48.430189] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:49.546 [2024-11-04 14:40:48.430569] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.546 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.546 14:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:14:49.546 14:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.546 14:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.546 14:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.546 14:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.546 14:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:49.546 14:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.546 14:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.546 14:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.546 14:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.546 14:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.546 14:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.546 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.546 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.546 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.546 14:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.546 "name": "raid_bdev1", 00:14:49.546 "uuid": "ebc7c25e-a527-4014-b7a8-c330983cc2fe", 00:14:49.546 "strip_size_kb": 0, 00:14:49.546 "state": "online", 00:14:49.546 "raid_level": "raid1", 00:14:49.546 "superblock": true, 00:14:49.546 "num_base_bdevs": 4, 00:14:49.546 "num_base_bdevs_discovered": 4, 00:14:49.546 
"num_base_bdevs_operational": 4, 00:14:49.546 "base_bdevs_list": [ 00:14:49.546 { 00:14:49.546 "name": "BaseBdev1", 00:14:49.546 "uuid": "9a445de3-8f5b-53c6-badb-b76031a5d2c2", 00:14:49.546 "is_configured": true, 00:14:49.546 "data_offset": 2048, 00:14:49.546 "data_size": 63488 00:14:49.546 }, 00:14:49.546 { 00:14:49.546 "name": "BaseBdev2", 00:14:49.546 "uuid": "088d51e7-ba13-5a26-9691-39c33d3e5501", 00:14:49.546 "is_configured": true, 00:14:49.546 "data_offset": 2048, 00:14:49.546 "data_size": 63488 00:14:49.546 }, 00:14:49.546 { 00:14:49.546 "name": "BaseBdev3", 00:14:49.546 "uuid": "5e5bacf3-3295-50ae-a541-ebffe84865cd", 00:14:49.546 "is_configured": true, 00:14:49.546 "data_offset": 2048, 00:14:49.546 "data_size": 63488 00:14:49.546 }, 00:14:49.546 { 00:14:49.546 "name": "BaseBdev4", 00:14:49.546 "uuid": "815d0abd-f5e3-5831-8084-666f6f6d1ea1", 00:14:49.546 "is_configured": true, 00:14:49.546 "data_offset": 2048, 00:14:49.546 "data_size": 63488 00:14:49.546 } 00:14:49.546 ] 00:14:49.546 }' 00:14:49.546 14:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.546 14:40:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.827 14:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:49.827 14:40:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:50.084 [2024-11-04 14:40:49.072183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:51.017 14:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:51.017 14:40:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.017 14:40:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.017 [2024-11-04 14:40:49.954689] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:14:51.017 [2024-11-04 14:40:49.954972] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:51.017 [2024-11-04 14:40:49.955271] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:14:51.017 14:40:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.017 14:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:51.017 14:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:51.018 14:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:14:51.018 14:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:14:51.018 14:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:51.018 14:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.018 14:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.018 14:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:51.018 14:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:51.018 14:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.018 14:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.018 14:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.018 14:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.018 14:40:49 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.018 14:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.018 14:40:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.018 14:40:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.018 14:40:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.018 14:40:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.018 14:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.018 "name": "raid_bdev1", 00:14:51.018 "uuid": "ebc7c25e-a527-4014-b7a8-c330983cc2fe", 00:14:51.018 "strip_size_kb": 0, 00:14:51.018 "state": "online", 00:14:51.018 "raid_level": "raid1", 00:14:51.018 "superblock": true, 00:14:51.018 "num_base_bdevs": 4, 00:14:51.018 "num_base_bdevs_discovered": 3, 00:14:51.018 "num_base_bdevs_operational": 3, 00:14:51.018 "base_bdevs_list": [ 00:14:51.018 { 00:14:51.018 "name": null, 00:14:51.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.018 "is_configured": false, 00:14:51.018 "data_offset": 0, 00:14:51.018 "data_size": 63488 00:14:51.018 }, 00:14:51.018 { 00:14:51.018 "name": "BaseBdev2", 00:14:51.018 "uuid": "088d51e7-ba13-5a26-9691-39c33d3e5501", 00:14:51.018 "is_configured": true, 00:14:51.018 "data_offset": 2048, 00:14:51.018 "data_size": 63488 00:14:51.018 }, 00:14:51.018 { 00:14:51.018 "name": "BaseBdev3", 00:14:51.018 "uuid": "5e5bacf3-3295-50ae-a541-ebffe84865cd", 00:14:51.018 "is_configured": true, 00:14:51.018 "data_offset": 2048, 00:14:51.018 "data_size": 63488 00:14:51.018 }, 00:14:51.018 { 00:14:51.018 "name": "BaseBdev4", 00:14:51.018 "uuid": "815d0abd-f5e3-5831-8084-666f6f6d1ea1", 00:14:51.018 "is_configured": true, 00:14:51.018 "data_offset": 2048, 00:14:51.018 "data_size": 63488 00:14:51.018 } 00:14:51.018 ] 
00:14:51.018 }' 00:14:51.018 14:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.018 14:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.585 14:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:51.585 14:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.585 14:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.585 [2024-11-04 14:40:50.491046] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:51.585 [2024-11-04 14:40:50.491082] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:51.585 [2024-11-04 14:40:50.494421] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:51.585 [2024-11-04 14:40:50.494686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.585 [2024-11-04 14:40:50.494847] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:51.585 [2024-11-04 14:40:50.494865] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:51.585 { 00:14:51.585 "results": [ 00:14:51.585 { 00:14:51.585 "job": "raid_bdev1", 00:14:51.585 "core_mask": "0x1", 00:14:51.585 "workload": "randrw", 00:14:51.585 "percentage": 50, 00:14:51.585 "status": "finished", 00:14:51.585 "queue_depth": 1, 00:14:51.585 "io_size": 131072, 00:14:51.585 "runtime": 1.415361, 00:14:51.585 "iops": 8005.0248664475, 00:14:51.585 "mibps": 1000.6281083059375, 00:14:51.585 "io_failed": 0, 00:14:51.585 "io_timeout": 0, 00:14:51.585 "avg_latency_us": 120.3416910856134, 00:14:51.585 "min_latency_us": 43.054545454545455, 00:14:51.585 "max_latency_us": 1854.370909090909 00:14:51.585 } 00:14:51.585 ], 00:14:51.585 "core_count": 1 
00:14:51.585 } 00:14:51.585 14:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.585 14:40:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75326 00:14:51.585 14:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 75326 ']' 00:14:51.585 14:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 75326 00:14:51.585 14:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:14:51.585 14:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:51.585 14:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75326 00:14:51.585 killing process with pid 75326 00:14:51.585 14:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:51.585 14:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:51.585 14:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75326' 00:14:51.585 14:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 75326 00:14:51.585 [2024-11-04 14:40:50.525557] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:51.585 14:40:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 75326 00:14:51.843 [2024-11-04 14:40:50.819816] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:53.218 14:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.gXqXOhpx1e 00:14:53.218 14:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:53.218 14:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:53.218 ************************************ 00:14:53.218 END TEST 
raid_write_error_test 00:14:53.218 ************************************ 00:14:53.218 14:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:14:53.218 14:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:14:53.218 14:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:53.218 14:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:53.219 14:40:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:53.219 00:14:53.219 real 0m4.927s 00:14:53.219 user 0m6.071s 00:14:53.219 sys 0m0.626s 00:14:53.219 14:40:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:53.219 14:40:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.219 14:40:51 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:14:53.219 14:40:51 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:14:53.219 14:40:51 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:14:53.219 14:40:51 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:14:53.219 14:40:51 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:53.219 14:40:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:53.219 ************************************ 00:14:53.219 START TEST raid_rebuild_test 00:14:53.219 ************************************ 00:14:53.219 14:40:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false false true 00:14:53.219 14:40:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:53.219 14:40:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:53.219 14:40:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 
00:14:53.219 14:40:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:53.219 14:40:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:53.219 14:40:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:53.219 14:40:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:53.219 14:40:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:53.219 14:40:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:53.219 14:40:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:53.219 14:40:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:53.219 14:40:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:53.219 14:40:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:53.219 14:40:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:53.219 14:40:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:53.219 14:40:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:53.219 14:40:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:53.219 14:40:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:53.219 14:40:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:53.219 14:40:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:53.219 14:40:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:53.219 14:40:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:53.219 14:40:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true 
']' 00:14:53.219 14:40:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75475 00:14:53.219 14:40:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75475 00:14:53.219 14:40:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:53.219 14:40:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 75475 ']' 00:14:53.219 14:40:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.219 14:40:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:53.219 14:40:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.219 14:40:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:53.219 14:40:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.219 [2024-11-04 14:40:52.066640] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:14:53.219 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:53.219 Zero copy mechanism will not be used. 
00:14:53.219 [2024-11-04 14:40:52.067047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75475 ] 00:14:53.219 [2024-11-04 14:40:52.248968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.478 [2024-11-04 14:40:52.403614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.754 [2024-11-04 14:40:52.627732] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:53.754 [2024-11-04 14:40:52.628059] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:54.012 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:54.012 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:14:54.012 14:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:54.012 14:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:54.012 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.012 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.272 BaseBdev1_malloc 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.272 [2024-11-04 14:40:53.167647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:54.272 
[2024-11-04 14:40:53.167737] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.272 [2024-11-04 14:40:53.167772] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:54.272 [2024-11-04 14:40:53.167790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.272 [2024-11-04 14:40:53.170613] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.272 [2024-11-04 14:40:53.170665] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:54.272 BaseBdev1 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.272 BaseBdev2_malloc 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.272 [2024-11-04 14:40:53.215164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:54.272 [2024-11-04 14:40:53.215241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.272 [2024-11-04 14:40:53.215271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:14:54.272 [2024-11-04 14:40:53.215291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.272 [2024-11-04 14:40:53.218070] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.272 [2024-11-04 14:40:53.218122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:54.272 BaseBdev2 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.272 spare_malloc 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.272 spare_delay 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.272 [2024-11-04 14:40:53.283461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:54.272 [2024-11-04 14:40:53.283689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:14:54.272 [2024-11-04 14:40:53.283729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:54.272 [2024-11-04 14:40:53.283748] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.272 [2024-11-04 14:40:53.286552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.272 [2024-11-04 14:40:53.286604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:54.272 spare 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.272 [2024-11-04 14:40:53.291573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:54.272 [2024-11-04 14:40:53.293988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:54.272 [2024-11-04 14:40:53.294123] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:54.272 [2024-11-04 14:40:53.294146] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:54.272 [2024-11-04 14:40:53.294468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:54.272 [2024-11-04 14:40:53.294667] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:54.272 [2024-11-04 14:40:53.294685] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:54.272 [2024-11-04 14:40:53.294871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.272 "name": "raid_bdev1", 00:14:54.272 "uuid": "bb726291-53c2-4f48-a655-7880a32263f9", 00:14:54.272 "strip_size_kb": 0, 00:14:54.272 "state": "online", 00:14:54.272 
"raid_level": "raid1", 00:14:54.272 "superblock": false, 00:14:54.272 "num_base_bdevs": 2, 00:14:54.272 "num_base_bdevs_discovered": 2, 00:14:54.272 "num_base_bdevs_operational": 2, 00:14:54.272 "base_bdevs_list": [ 00:14:54.272 { 00:14:54.272 "name": "BaseBdev1", 00:14:54.272 "uuid": "adbd1414-9ac5-535a-a152-8073921cb687", 00:14:54.272 "is_configured": true, 00:14:54.272 "data_offset": 0, 00:14:54.272 "data_size": 65536 00:14:54.272 }, 00:14:54.272 { 00:14:54.272 "name": "BaseBdev2", 00:14:54.272 "uuid": "271199d2-5424-5086-9ba3-33ae261d5f2f", 00:14:54.272 "is_configured": true, 00:14:54.272 "data_offset": 0, 00:14:54.272 "data_size": 65536 00:14:54.272 } 00:14:54.272 ] 00:14:54.272 }' 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.272 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.839 14:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:54.839 14:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:54.839 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.839 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.839 [2024-11-04 14:40:53.836101] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:54.839 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.839 14:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:54.839 14:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.839 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.839 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.839 14:40:53 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:54.839 14:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.839 14:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:54.839 14:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:54.839 14:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:54.839 14:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:54.839 14:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:54.839 14:40:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:54.839 14:40:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:54.839 14:40:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:54.839 14:40:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:54.839 14:40:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:54.839 14:40:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:54.839 14:40:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:54.839 14:40:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:54.839 14:40:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:55.406 [2024-11-04 14:40:54.219890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:55.406 /dev/nbd0 00:14:55.406 14:40:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:55.406 14:40:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:14:55.406 14:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:55.406 14:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:14:55.406 14:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:55.406 14:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:55.406 14:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:55.406 14:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:14:55.406 14:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:55.406 14:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:55.406 14:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:55.406 1+0 records in 00:14:55.406 1+0 records out 00:14:55.406 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357479 s, 11.5 MB/s 00:14:55.406 14:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:55.406 14:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:14:55.406 14:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:55.406 14:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:55.406 14:40:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:14:55.406 14:40:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:55.406 14:40:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:55.406 14:40:54 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:55.406 14:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:55.406 14:40:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:15:01.966 65536+0 records in 00:15:01.966 65536+0 records out 00:15:01.966 33554432 bytes (34 MB, 32 MiB) copied, 6.62944 s, 5.1 MB/s 00:15:01.966 14:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:01.966 14:41:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:01.966 14:41:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:01.966 14:41:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:01.966 14:41:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:01.966 14:41:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:01.966 14:41:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:02.225 [2024-11-04 14:41:01.228819] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.225 14:41:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:02.225 14:41:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:02.225 14:41:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:02.225 14:41:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:02.225 14:41:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:02.225 14:41:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:02.225 14:41:01 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:15:02.225 14:41:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:02.225 14:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:02.225 14:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.225 14:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.225 [2024-11-04 14:41:01.245340] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:02.225 14:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.225 14:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:02.225 14:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.225 14:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.225 14:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:02.225 14:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.225 14:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:02.225 14:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.225 14:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.225 14:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.225 14:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.225 14:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.225 14:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.225 14:41:01 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.225 14:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.225 14:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.225 14:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.225 "name": "raid_bdev1", 00:15:02.225 "uuid": "bb726291-53c2-4f48-a655-7880a32263f9", 00:15:02.225 "strip_size_kb": 0, 00:15:02.225 "state": "online", 00:15:02.225 "raid_level": "raid1", 00:15:02.225 "superblock": false, 00:15:02.225 "num_base_bdevs": 2, 00:15:02.225 "num_base_bdevs_discovered": 1, 00:15:02.225 "num_base_bdevs_operational": 1, 00:15:02.225 "base_bdevs_list": [ 00:15:02.225 { 00:15:02.225 "name": null, 00:15:02.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.225 "is_configured": false, 00:15:02.225 "data_offset": 0, 00:15:02.225 "data_size": 65536 00:15:02.225 }, 00:15:02.225 { 00:15:02.225 "name": "BaseBdev2", 00:15:02.225 "uuid": "271199d2-5424-5086-9ba3-33ae261d5f2f", 00:15:02.225 "is_configured": true, 00:15:02.225 "data_offset": 0, 00:15:02.225 "data_size": 65536 00:15:02.225 } 00:15:02.225 ] 00:15:02.225 }' 00:15:02.225 14:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.225 14:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.821 14:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:02.821 14:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.821 14:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.821 [2024-11-04 14:41:01.733540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:02.821 [2024-11-04 14:41:01.750406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:15:02.821 14:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.821 14:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:02.821 [2024-11-04 14:41:01.753206] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:03.758 14:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:03.758 14:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:03.758 14:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:03.758 14:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:03.758 14:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:03.758 14:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.758 14:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.758 14:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.758 14:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.758 14:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.758 14:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.758 "name": "raid_bdev1", 00:15:03.758 "uuid": "bb726291-53c2-4f48-a655-7880a32263f9", 00:15:03.758 "strip_size_kb": 0, 00:15:03.758 "state": "online", 00:15:03.758 "raid_level": "raid1", 00:15:03.758 "superblock": false, 00:15:03.758 "num_base_bdevs": 2, 00:15:03.758 "num_base_bdevs_discovered": 2, 00:15:03.758 "num_base_bdevs_operational": 2, 00:15:03.758 "process": { 00:15:03.758 "type": "rebuild", 00:15:03.758 "target": "spare", 00:15:03.758 "progress": { 00:15:03.758 
"blocks": 20480, 00:15:03.758 "percent": 31 00:15:03.758 } 00:15:03.758 }, 00:15:03.758 "base_bdevs_list": [ 00:15:03.758 { 00:15:03.758 "name": "spare", 00:15:03.758 "uuid": "66992c9b-e7fb-52b7-a113-79ffe539fb55", 00:15:03.758 "is_configured": true, 00:15:03.758 "data_offset": 0, 00:15:03.758 "data_size": 65536 00:15:03.758 }, 00:15:03.758 { 00:15:03.758 "name": "BaseBdev2", 00:15:03.758 "uuid": "271199d2-5424-5086-9ba3-33ae261d5f2f", 00:15:03.758 "is_configured": true, 00:15:03.758 "data_offset": 0, 00:15:03.758 "data_size": 65536 00:15:03.758 } 00:15:03.758 ] 00:15:03.758 }' 00:15:03.758 14:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.758 14:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:03.758 14:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.017 14:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.017 14:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:04.017 14:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.017 14:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.017 [2024-11-04 14:41:02.926403] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:04.017 [2024-11-04 14:41:02.962438] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:04.017 [2024-11-04 14:41:02.962544] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.017 [2024-11-04 14:41:02.962570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:04.017 [2024-11-04 14:41:02.962586] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:04.017 14:41:02 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.017 14:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:04.017 14:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.017 14:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.017 14:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:04.017 14:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:04.017 14:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:04.017 14:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.017 14:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.017 14:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.017 14:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.017 14:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.017 14:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.017 14:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.017 14:41:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.017 14:41:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.017 14:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.017 "name": "raid_bdev1", 00:15:04.017 "uuid": "bb726291-53c2-4f48-a655-7880a32263f9", 00:15:04.017 "strip_size_kb": 0, 00:15:04.017 "state": "online", 00:15:04.017 "raid_level": "raid1", 00:15:04.017 
"superblock": false, 00:15:04.017 "num_base_bdevs": 2, 00:15:04.017 "num_base_bdevs_discovered": 1, 00:15:04.017 "num_base_bdevs_operational": 1, 00:15:04.017 "base_bdevs_list": [ 00:15:04.017 { 00:15:04.017 "name": null, 00:15:04.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.017 "is_configured": false, 00:15:04.017 "data_offset": 0, 00:15:04.017 "data_size": 65536 00:15:04.017 }, 00:15:04.017 { 00:15:04.017 "name": "BaseBdev2", 00:15:04.017 "uuid": "271199d2-5424-5086-9ba3-33ae261d5f2f", 00:15:04.017 "is_configured": true, 00:15:04.017 "data_offset": 0, 00:15:04.017 "data_size": 65536 00:15:04.017 } 00:15:04.017 ] 00:15:04.017 }' 00:15:04.017 14:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.017 14:41:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.584 14:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:04.584 14:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.584 14:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:04.584 14:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:04.584 14:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.584 14:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.584 14:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.584 14:41:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.584 14:41:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.584 14:41:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.584 14:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:15:04.584 "name": "raid_bdev1", 00:15:04.584 "uuid": "bb726291-53c2-4f48-a655-7880a32263f9", 00:15:04.584 "strip_size_kb": 0, 00:15:04.584 "state": "online", 00:15:04.584 "raid_level": "raid1", 00:15:04.584 "superblock": false, 00:15:04.584 "num_base_bdevs": 2, 00:15:04.584 "num_base_bdevs_discovered": 1, 00:15:04.584 "num_base_bdevs_operational": 1, 00:15:04.584 "base_bdevs_list": [ 00:15:04.584 { 00:15:04.584 "name": null, 00:15:04.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.584 "is_configured": false, 00:15:04.584 "data_offset": 0, 00:15:04.584 "data_size": 65536 00:15:04.584 }, 00:15:04.584 { 00:15:04.584 "name": "BaseBdev2", 00:15:04.584 "uuid": "271199d2-5424-5086-9ba3-33ae261d5f2f", 00:15:04.584 "is_configured": true, 00:15:04.584 "data_offset": 0, 00:15:04.584 "data_size": 65536 00:15:04.584 } 00:15:04.584 ] 00:15:04.584 }' 00:15:04.584 14:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.584 14:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:04.584 14:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.584 14:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:04.584 14:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:04.584 14:41:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.584 14:41:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.584 [2024-11-04 14:41:03.662634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:04.584 [2024-11-04 14:41:03.679610] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:15:04.584 14:41:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.584 
14:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:04.584 [2024-11-04 14:41:03.682150] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.966 "name": "raid_bdev1", 00:15:05.966 "uuid": "bb726291-53c2-4f48-a655-7880a32263f9", 00:15:05.966 "strip_size_kb": 0, 00:15:05.966 "state": "online", 00:15:05.966 "raid_level": "raid1", 00:15:05.966 "superblock": false, 00:15:05.966 "num_base_bdevs": 2, 00:15:05.966 "num_base_bdevs_discovered": 2, 00:15:05.966 "num_base_bdevs_operational": 2, 00:15:05.966 "process": { 00:15:05.966 "type": "rebuild", 00:15:05.966 "target": "spare", 00:15:05.966 "progress": { 00:15:05.966 "blocks": 20480, 00:15:05.966 "percent": 31 00:15:05.966 } 00:15:05.966 }, 00:15:05.966 "base_bdevs_list": [ 
00:15:05.966 { 00:15:05.966 "name": "spare", 00:15:05.966 "uuid": "66992c9b-e7fb-52b7-a113-79ffe539fb55", 00:15:05.966 "is_configured": true, 00:15:05.966 "data_offset": 0, 00:15:05.966 "data_size": 65536 00:15:05.966 }, 00:15:05.966 { 00:15:05.966 "name": "BaseBdev2", 00:15:05.966 "uuid": "271199d2-5424-5086-9ba3-33ae261d5f2f", 00:15:05.966 "is_configured": true, 00:15:05.966 "data_offset": 0, 00:15:05.966 "data_size": 65536 00:15:05.966 } 00:15:05.966 ] 00:15:05.966 }' 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=397 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.966 
14:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.966 "name": "raid_bdev1", 00:15:05.966 "uuid": "bb726291-53c2-4f48-a655-7880a32263f9", 00:15:05.966 "strip_size_kb": 0, 00:15:05.966 "state": "online", 00:15:05.966 "raid_level": "raid1", 00:15:05.966 "superblock": false, 00:15:05.966 "num_base_bdevs": 2, 00:15:05.966 "num_base_bdevs_discovered": 2, 00:15:05.966 "num_base_bdevs_operational": 2, 00:15:05.966 "process": { 00:15:05.966 "type": "rebuild", 00:15:05.966 "target": "spare", 00:15:05.966 "progress": { 00:15:05.966 "blocks": 22528, 00:15:05.966 "percent": 34 00:15:05.966 } 00:15:05.966 }, 00:15:05.966 "base_bdevs_list": [ 00:15:05.966 { 00:15:05.966 "name": "spare", 00:15:05.966 "uuid": "66992c9b-e7fb-52b7-a113-79ffe539fb55", 00:15:05.966 "is_configured": true, 00:15:05.966 "data_offset": 0, 00:15:05.966 "data_size": 65536 00:15:05.966 }, 00:15:05.966 { 00:15:05.966 "name": "BaseBdev2", 00:15:05.966 "uuid": "271199d2-5424-5086-9ba3-33ae261d5f2f", 00:15:05.966 "is_configured": true, 00:15:05.966 "data_offset": 0, 00:15:05.966 "data_size": 65536 00:15:05.966 } 00:15:05.966 ] 00:15:05.966 }' 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.966 14:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:06.933 14:41:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:06.933 14:41:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.933 14:41:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.933 14:41:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.933 14:41:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.933 14:41:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.933 14:41:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.933 14:41:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.933 14:41:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.933 14:41:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.933 14:41:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.933 14:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.933 "name": "raid_bdev1", 00:15:06.933 "uuid": "bb726291-53c2-4f48-a655-7880a32263f9", 00:15:06.933 "strip_size_kb": 0, 00:15:06.933 "state": "online", 00:15:06.933 "raid_level": "raid1", 00:15:06.933 "superblock": false, 00:15:06.933 "num_base_bdevs": 2, 00:15:06.933 "num_base_bdevs_discovered": 2, 00:15:06.933 "num_base_bdevs_operational": 2, 00:15:06.933 "process": { 
00:15:06.933 "type": "rebuild", 00:15:06.933 "target": "spare", 00:15:06.933 "progress": { 00:15:06.933 "blocks": 45056, 00:15:06.933 "percent": 68 00:15:06.933 } 00:15:06.933 }, 00:15:06.933 "base_bdevs_list": [ 00:15:06.933 { 00:15:06.933 "name": "spare", 00:15:06.933 "uuid": "66992c9b-e7fb-52b7-a113-79ffe539fb55", 00:15:06.933 "is_configured": true, 00:15:06.933 "data_offset": 0, 00:15:06.933 "data_size": 65536 00:15:06.933 }, 00:15:06.933 { 00:15:06.933 "name": "BaseBdev2", 00:15:06.933 "uuid": "271199d2-5424-5086-9ba3-33ae261d5f2f", 00:15:06.933 "is_configured": true, 00:15:06.933 "data_offset": 0, 00:15:06.933 "data_size": 65536 00:15:06.933 } 00:15:06.933 ] 00:15:06.933 }' 00:15:06.933 14:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.192 14:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:07.192 14:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.192 14:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:07.192 14:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:08.142 [2024-11-04 14:41:06.905705] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:08.142 [2024-11-04 14:41:06.905817] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:08.142 [2024-11-04 14:41:06.905890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.142 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:08.142 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.142 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.142 14:41:07 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.142 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.142 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.142 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.142 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.142 14:41:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.142 14:41:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.142 14:41:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.142 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.142 "name": "raid_bdev1", 00:15:08.142 "uuid": "bb726291-53c2-4f48-a655-7880a32263f9", 00:15:08.142 "strip_size_kb": 0, 00:15:08.142 "state": "online", 00:15:08.142 "raid_level": "raid1", 00:15:08.142 "superblock": false, 00:15:08.142 "num_base_bdevs": 2, 00:15:08.142 "num_base_bdevs_discovered": 2, 00:15:08.142 "num_base_bdevs_operational": 2, 00:15:08.142 "base_bdevs_list": [ 00:15:08.142 { 00:15:08.142 "name": "spare", 00:15:08.142 "uuid": "66992c9b-e7fb-52b7-a113-79ffe539fb55", 00:15:08.142 "is_configured": true, 00:15:08.142 "data_offset": 0, 00:15:08.142 "data_size": 65536 00:15:08.142 }, 00:15:08.142 { 00:15:08.142 "name": "BaseBdev2", 00:15:08.142 "uuid": "271199d2-5424-5086-9ba3-33ae261d5f2f", 00:15:08.142 "is_configured": true, 00:15:08.142 "data_offset": 0, 00:15:08.142 "data_size": 65536 00:15:08.142 } 00:15:08.142 ] 00:15:08.142 }' 00:15:08.142 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.142 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:08.142 14:41:07 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.421 "name": "raid_bdev1", 00:15:08.421 "uuid": "bb726291-53c2-4f48-a655-7880a32263f9", 00:15:08.421 "strip_size_kb": 0, 00:15:08.421 "state": "online", 00:15:08.421 "raid_level": "raid1", 00:15:08.421 "superblock": false, 00:15:08.421 "num_base_bdevs": 2, 00:15:08.421 "num_base_bdevs_discovered": 2, 00:15:08.421 "num_base_bdevs_operational": 2, 00:15:08.421 "base_bdevs_list": [ 00:15:08.421 { 00:15:08.421 "name": "spare", 00:15:08.421 "uuid": "66992c9b-e7fb-52b7-a113-79ffe539fb55", 00:15:08.421 "is_configured": true, 
00:15:08.421 "data_offset": 0, 00:15:08.421 "data_size": 65536 00:15:08.421 }, 00:15:08.421 { 00:15:08.421 "name": "BaseBdev2", 00:15:08.421 "uuid": "271199d2-5424-5086-9ba3-33ae261d5f2f", 00:15:08.421 "is_configured": true, 00:15:08.421 "data_offset": 0, 00:15:08.421 "data_size": 65536 00:15:08.421 } 00:15:08.421 ] 00:15:08.421 }' 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.421 "name": "raid_bdev1", 00:15:08.421 "uuid": "bb726291-53c2-4f48-a655-7880a32263f9", 00:15:08.421 "strip_size_kb": 0, 00:15:08.421 "state": "online", 00:15:08.421 "raid_level": "raid1", 00:15:08.421 "superblock": false, 00:15:08.421 "num_base_bdevs": 2, 00:15:08.421 "num_base_bdevs_discovered": 2, 00:15:08.421 "num_base_bdevs_operational": 2, 00:15:08.421 "base_bdevs_list": [ 00:15:08.421 { 00:15:08.421 "name": "spare", 00:15:08.421 "uuid": "66992c9b-e7fb-52b7-a113-79ffe539fb55", 00:15:08.421 "is_configured": true, 00:15:08.421 "data_offset": 0, 00:15:08.421 "data_size": 65536 00:15:08.421 }, 00:15:08.421 { 00:15:08.421 "name": "BaseBdev2", 00:15:08.421 "uuid": "271199d2-5424-5086-9ba3-33ae261d5f2f", 00:15:08.421 "is_configured": true, 00:15:08.421 "data_offset": 0, 00:15:08.421 "data_size": 65536 00:15:08.421 } 00:15:08.421 ] 00:15:08.421 }' 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.421 14:41:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.987 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:08.987 14:41:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.987 14:41:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.987 [2024-11-04 14:41:07.909597] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:08.987 [2024-11-04 14:41:07.909644] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:08.987 [2024-11-04 14:41:07.909754] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:08.987 [2024-11-04 14:41:07.909854] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:08.987 [2024-11-04 14:41:07.909872] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:08.987 14:41:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.987 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.987 14:41:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.987 14:41:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.987 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:08.987 14:41:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.987 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:08.987 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:08.987 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:08.987 14:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:08.987 14:41:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:08.987 14:41:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:08.987 14:41:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:08.987 14:41:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:15:08.987 14:41:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:08.987 14:41:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:08.988 14:41:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:08.988 14:41:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:08.988 14:41:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:09.245 /dev/nbd0 00:15:09.245 14:41:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:09.245 14:41:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:09.245 14:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:09.245 14:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:15:09.245 14:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:09.245 14:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:09.245 14:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:09.245 14:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:15:09.245 14:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:09.245 14:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:09.245 14:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:09.245 1+0 records in 00:15:09.245 1+0 records out 00:15:09.245 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308254 s, 13.3 MB/s 00:15:09.245 14:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.245 14:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:15:09.245 14:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.245 14:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:09.245 14:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:15:09.245 14:41:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:09.245 14:41:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:09.245 14:41:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:09.503 /dev/nbd1 00:15:09.503 14:41:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:09.503 14:41:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:09.503 14:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:09.503 14:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:15:09.503 14:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:09.503 14:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:09.503 14:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:09.503 14:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:15:09.503 14:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:09.503 14:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:09.503 14:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:09.503 1+0 records in 00:15:09.503 1+0 records out 00:15:09.503 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404747 s, 10.1 MB/s 00:15:09.503 14:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.503 14:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:15:09.503 14:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.503 14:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:09.503 14:41:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:15:09.503 14:41:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:09.503 14:41:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:09.503 14:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:09.759 14:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:09.759 14:41:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:09.759 14:41:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:09.760 14:41:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:09.760 14:41:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:09.760 14:41:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:09.760 14:41:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:10.017 14:41:09 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:10.017 14:41:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:10.017 14:41:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:10.017 14:41:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:10.017 14:41:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:10.017 14:41:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:10.017 14:41:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:10.017 14:41:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:10.017 14:41:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:10.017 14:41:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:10.583 14:41:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:10.583 14:41:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:10.583 14:41:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:10.583 14:41:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:10.583 14:41:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:10.583 14:41:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:10.583 14:41:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:10.583 14:41:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:10.583 14:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:10.583 14:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75475 00:15:10.583 14:41:09 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 75475 ']' 00:15:10.583 14:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 75475 00:15:10.583 14:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:15:10.583 14:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:10.583 14:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75475 00:15:10.583 killing process with pid 75475 00:15:10.583 Received shutdown signal, test time was about 60.000000 seconds 00:15:10.583 00:15:10.583 Latency(us) 00:15:10.583 [2024-11-04T14:41:09.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.583 [2024-11-04T14:41:09.706Z] =================================================================================================================== 00:15:10.583 [2024-11-04T14:41:09.706Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:10.583 14:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:10.583 14:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:10.583 14:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75475' 00:15:10.583 14:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 75475 00:15:10.583 14:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 75475 00:15:10.583 [2024-11-04 14:41:09.485770] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:10.841 [2024-11-04 14:41:09.755944] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:11.773 00:15:11.773 real 0m18.804s 00:15:11.773 user 0m21.402s 00:15:11.773 sys 0m3.730s 00:15:11.773 14:41:10 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:11.773 ************************************ 00:15:11.773 END TEST raid_rebuild_test 00:15:11.773 ************************************ 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.773 14:41:10 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:15:11.773 14:41:10 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:15:11.773 14:41:10 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:11.773 14:41:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:11.773 ************************************ 00:15:11.773 START TEST raid_rebuild_test_sb 00:15:11.773 ************************************ 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75921 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75921 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 75921 ']' 00:15:11.773 14:41:10 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:11.773 14:41:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.032 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:12.032 Zero copy mechanism will not be used. 00:15:12.032 [2024-11-04 14:41:10.938890] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:15:12.032 [2024-11-04 14:41:10.939064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75921 ] 00:15:12.032 [2024-11-04 14:41:11.108816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.289 [2024-11-04 14:41:11.238646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.546 [2024-11-04 14:41:11.439703] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:12.546 [2024-11-04 14:41:11.439763] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:12.805 14:41:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:12.805 14:41:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:15:12.805 14:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev 
in "${base_bdevs[@]}" 00:15:12.805 14:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:12.805 14:41:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.805 14:41:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.064 BaseBdev1_malloc 00:15:13.064 14:41:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.064 14:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:13.064 14:41:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.064 14:41:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.064 [2024-11-04 14:41:11.959347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:13.064 [2024-11-04 14:41:11.959440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.064 [2024-11-04 14:41:11.959471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:13.064 [2024-11-04 14:41:11.959490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.064 [2024-11-04 14:41:11.962297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.064 [2024-11-04 14:41:11.962502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:13.064 BaseBdev1 00:15:13.064 14:41:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.064 14:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:13.064 14:41:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:13.064 14:41:11 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.064 14:41:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.064 BaseBdev2_malloc 00:15:13.064 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.064 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:13.064 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.064 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.064 [2024-11-04 14:41:12.011101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:13.064 [2024-11-04 14:41:12.011182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.064 [2024-11-04 14:41:12.011211] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:13.064 [2024-11-04 14:41:12.011232] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.064 [2024-11-04 14:41:12.013943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.064 [2024-11-04 14:41:12.014003] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:13.064 BaseBdev2 00:15:13.064 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.064 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:13.064 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.064 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.064 spare_malloc 00:15:13.064 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:13.064 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:13.064 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.064 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.064 spare_delay 00:15:13.064 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.064 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:13.064 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.064 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.064 [2024-11-04 14:41:12.080950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:13.064 [2024-11-04 14:41:12.081028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.064 [2024-11-04 14:41:12.081063] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:13.064 [2024-11-04 14:41:12.081083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.064 [2024-11-04 14:41:12.083908] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.064 [2024-11-04 14:41:12.083986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:13.064 spare 00:15:13.064 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.064 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:13.064 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.064 14:41:12 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.064 [2024-11-04 14:41:12.089016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:13.064 [2024-11-04 14:41:12.091412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:13.064 [2024-11-04 14:41:12.091635] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:13.064 [2024-11-04 14:41:12.091660] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:13.064 [2024-11-04 14:41:12.092019] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:13.064 [2024-11-04 14:41:12.092237] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:13.064 [2024-11-04 14:41:12.092253] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:13.064 [2024-11-04 14:41:12.092447] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.064 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.064 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:13.064 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.064 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.064 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:13.064 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:13.064 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:13.064 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:15:13.064 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.065 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.065 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.065 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.065 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.065 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.065 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.065 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.065 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.065 "name": "raid_bdev1", 00:15:13.065 "uuid": "44a91767-7c69-442e-8103-017c920f2c64", 00:15:13.065 "strip_size_kb": 0, 00:15:13.065 "state": "online", 00:15:13.065 "raid_level": "raid1", 00:15:13.065 "superblock": true, 00:15:13.065 "num_base_bdevs": 2, 00:15:13.065 "num_base_bdevs_discovered": 2, 00:15:13.065 "num_base_bdevs_operational": 2, 00:15:13.065 "base_bdevs_list": [ 00:15:13.065 { 00:15:13.065 "name": "BaseBdev1", 00:15:13.065 "uuid": "63e9a503-e303-5e2d-b444-f75b9cf309b0", 00:15:13.065 "is_configured": true, 00:15:13.065 "data_offset": 2048, 00:15:13.065 "data_size": 63488 00:15:13.065 }, 00:15:13.065 { 00:15:13.065 "name": "BaseBdev2", 00:15:13.065 "uuid": "ebbcb553-369f-54e8-b34a-7adfb52a2639", 00:15:13.065 "is_configured": true, 00:15:13.065 "data_offset": 2048, 00:15:13.065 "data_size": 63488 00:15:13.065 } 00:15:13.065 ] 00:15:13.065 }' 00:15:13.065 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.065 14:41:12 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:13.632 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:13.632 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:13.632 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.632 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.632 [2024-11-04 14:41:12.573448] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:13.632 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.632 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:13.632 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.632 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:13.632 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.632 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.632 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.632 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:13.632 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:13.632 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:13.632 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:13.632 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:13.632 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:15:13.632 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:13.632 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:13.632 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:13.632 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:13.632 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:13.632 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:13.632 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:13.632 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:13.893 [2024-11-04 14:41:12.909262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:13.893 /dev/nbd0 00:15:13.893 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:13.893 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:13.893 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:13.893 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:15:13.893 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:13.893 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:13.893 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:13.893 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:15:13.893 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 
00:15:13.893 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:13.893 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:13.893 1+0 records in 00:15:13.893 1+0 records out 00:15:13.893 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000518441 s, 7.9 MB/s 00:15:13.893 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.893 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:15:13.893 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.893 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:13.893 14:41:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:15:13.893 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:13.893 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:13.893 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:13.893 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:13.893 14:41:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:15:20.470 63488+0 records in 00:15:20.470 63488+0 records out 00:15:20.470 32505856 bytes (33 MB, 31 MiB) copied, 5.93153 s, 5.5 MB/s 00:15:20.470 14:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:20.470 14:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:20.470 14:41:18 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:20.470 14:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:20.470 14:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:20.470 14:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:20.470 14:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:20.470 [2024-11-04 14:41:19.244621] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.470 14:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:20.470 14:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:20.470 14:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:20.470 14:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:20.470 14:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:20.470 14:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:20.470 14:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:20.470 14:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:20.470 14:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:20.470 14:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.470 14:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.470 [2024-11-04 14:41:19.276772] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:20.470 14:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:15:20.470 14:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:20.470 14:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.470 14:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.470 14:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:20.470 14:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:20.470 14:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:20.470 14:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.470 14:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.470 14:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.470 14:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.470 14:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.470 14:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.470 14:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.470 14:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.470 14:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.470 14:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.470 "name": "raid_bdev1", 00:15:20.470 "uuid": "44a91767-7c69-442e-8103-017c920f2c64", 00:15:20.470 "strip_size_kb": 0, 00:15:20.470 "state": "online", 00:15:20.470 "raid_level": "raid1", 00:15:20.470 "superblock": true, 
00:15:20.470 "num_base_bdevs": 2, 00:15:20.470 "num_base_bdevs_discovered": 1, 00:15:20.470 "num_base_bdevs_operational": 1, 00:15:20.470 "base_bdevs_list": [ 00:15:20.470 { 00:15:20.470 "name": null, 00:15:20.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.470 "is_configured": false, 00:15:20.470 "data_offset": 0, 00:15:20.470 "data_size": 63488 00:15:20.470 }, 00:15:20.470 { 00:15:20.470 "name": "BaseBdev2", 00:15:20.470 "uuid": "ebbcb553-369f-54e8-b34a-7adfb52a2639", 00:15:20.470 "is_configured": true, 00:15:20.470 "data_offset": 2048, 00:15:20.470 "data_size": 63488 00:15:20.470 } 00:15:20.470 ] 00:15:20.470 }' 00:15:20.470 14:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.470 14:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.728 14:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:20.728 14:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.728 14:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.728 [2024-11-04 14:41:19.816879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:20.728 [2024-11-04 14:41:19.833148] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:15:20.728 14:41:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.728 14:41:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:20.728 [2024-11-04 14:41:19.835727] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:22.102 14:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:22.102 14:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:15:22.102 14:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:22.102 14:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:22.102 14:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.102 14:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.102 14:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.102 14:41:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.102 14:41:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.102 14:41:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.102 14:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.102 "name": "raid_bdev1", 00:15:22.102 "uuid": "44a91767-7c69-442e-8103-017c920f2c64", 00:15:22.102 "strip_size_kb": 0, 00:15:22.102 "state": "online", 00:15:22.102 "raid_level": "raid1", 00:15:22.102 "superblock": true, 00:15:22.102 "num_base_bdevs": 2, 00:15:22.102 "num_base_bdevs_discovered": 2, 00:15:22.102 "num_base_bdevs_operational": 2, 00:15:22.102 "process": { 00:15:22.102 "type": "rebuild", 00:15:22.102 "target": "spare", 00:15:22.102 "progress": { 00:15:22.102 "blocks": 20480, 00:15:22.102 "percent": 32 00:15:22.102 } 00:15:22.102 }, 00:15:22.102 "base_bdevs_list": [ 00:15:22.102 { 00:15:22.102 "name": "spare", 00:15:22.102 "uuid": "99b65669-0d04-5af8-9bb2-1c04ca69253a", 00:15:22.102 "is_configured": true, 00:15:22.102 "data_offset": 2048, 00:15:22.102 "data_size": 63488 00:15:22.102 }, 00:15:22.102 { 00:15:22.102 "name": "BaseBdev2", 00:15:22.102 "uuid": "ebbcb553-369f-54e8-b34a-7adfb52a2639", 00:15:22.102 "is_configured": true, 00:15:22.102 "data_offset": 2048, 00:15:22.102 "data_size": 63488 
00:15:22.102 } 00:15:22.102 ] 00:15:22.102 }' 00:15:22.102 14:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.102 14:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:22.102 14:41:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.102 14:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:22.102 14:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:22.102 14:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.102 14:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.102 [2024-11-04 14:41:21.013259] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:22.102 [2024-11-04 14:41:21.044475] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:22.102 [2024-11-04 14:41:21.044596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.102 [2024-11-04 14:41:21.044622] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:22.102 [2024-11-04 14:41:21.044641] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:22.102 14:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.102 14:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:22.102 14:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.102 14:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.102 14:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:15:22.102 14:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:22.102 14:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:22.102 14:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.102 14:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.102 14:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.102 14:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.102 14:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.102 14:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.102 14:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.103 14:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.103 14:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.103 14:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.103 "name": "raid_bdev1", 00:15:22.103 "uuid": "44a91767-7c69-442e-8103-017c920f2c64", 00:15:22.103 "strip_size_kb": 0, 00:15:22.103 "state": "online", 00:15:22.103 "raid_level": "raid1", 00:15:22.103 "superblock": true, 00:15:22.103 "num_base_bdevs": 2, 00:15:22.103 "num_base_bdevs_discovered": 1, 00:15:22.103 "num_base_bdevs_operational": 1, 00:15:22.103 "base_bdevs_list": [ 00:15:22.103 { 00:15:22.103 "name": null, 00:15:22.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.103 "is_configured": false, 00:15:22.103 "data_offset": 0, 00:15:22.103 "data_size": 63488 00:15:22.103 }, 00:15:22.103 { 00:15:22.103 "name": "BaseBdev2", 00:15:22.103 "uuid": 
"ebbcb553-369f-54e8-b34a-7adfb52a2639", 00:15:22.103 "is_configured": true, 00:15:22.103 "data_offset": 2048, 00:15:22.103 "data_size": 63488 00:15:22.103 } 00:15:22.103 ] 00:15:22.103 }' 00:15:22.103 14:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.103 14:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.667 14:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:22.667 14:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.667 14:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:22.667 14:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:22.667 14:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.667 14:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.667 14:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.667 14:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.667 14:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.667 14:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.667 14:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.667 "name": "raid_bdev1", 00:15:22.667 "uuid": "44a91767-7c69-442e-8103-017c920f2c64", 00:15:22.667 "strip_size_kb": 0, 00:15:22.667 "state": "online", 00:15:22.667 "raid_level": "raid1", 00:15:22.667 "superblock": true, 00:15:22.667 "num_base_bdevs": 2, 00:15:22.667 "num_base_bdevs_discovered": 1, 00:15:22.667 "num_base_bdevs_operational": 1, 00:15:22.667 "base_bdevs_list": [ 00:15:22.667 { 
00:15:22.667 "name": null, 00:15:22.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.667 "is_configured": false, 00:15:22.667 "data_offset": 0, 00:15:22.667 "data_size": 63488 00:15:22.667 }, 00:15:22.667 { 00:15:22.667 "name": "BaseBdev2", 00:15:22.667 "uuid": "ebbcb553-369f-54e8-b34a-7adfb52a2639", 00:15:22.667 "is_configured": true, 00:15:22.667 "data_offset": 2048, 00:15:22.667 "data_size": 63488 00:15:22.667 } 00:15:22.667 ] 00:15:22.667 }' 00:15:22.667 14:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.667 14:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:22.667 14:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.667 14:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:22.667 14:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:22.667 14:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.667 14:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.667 [2024-11-04 14:41:21.756984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:22.667 [2024-11-04 14:41:21.772350] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:15:22.667 14:41:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.667 14:41:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:22.667 [2024-11-04 14:41:21.774811] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:24.041 14:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:24.041 14:41:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.041 14:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:24.041 14:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:24.041 14:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.041 14:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.041 14:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.041 14:41:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.041 14:41:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.041 14:41:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.041 14:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.041 "name": "raid_bdev1", 00:15:24.041 "uuid": "44a91767-7c69-442e-8103-017c920f2c64", 00:15:24.041 "strip_size_kb": 0, 00:15:24.041 "state": "online", 00:15:24.041 "raid_level": "raid1", 00:15:24.041 "superblock": true, 00:15:24.041 "num_base_bdevs": 2, 00:15:24.041 "num_base_bdevs_discovered": 2, 00:15:24.041 "num_base_bdevs_operational": 2, 00:15:24.041 "process": { 00:15:24.041 "type": "rebuild", 00:15:24.041 "target": "spare", 00:15:24.041 "progress": { 00:15:24.041 "blocks": 20480, 00:15:24.041 "percent": 32 00:15:24.041 } 00:15:24.041 }, 00:15:24.041 "base_bdevs_list": [ 00:15:24.041 { 00:15:24.041 "name": "spare", 00:15:24.041 "uuid": "99b65669-0d04-5af8-9bb2-1c04ca69253a", 00:15:24.041 "is_configured": true, 00:15:24.041 "data_offset": 2048, 00:15:24.041 "data_size": 63488 00:15:24.041 }, 00:15:24.041 { 00:15:24.041 "name": "BaseBdev2", 00:15:24.041 "uuid": "ebbcb553-369f-54e8-b34a-7adfb52a2639", 00:15:24.041 
"is_configured": true, 00:15:24.041 "data_offset": 2048, 00:15:24.041 "data_size": 63488 00:15:24.041 } 00:15:24.041 ] 00:15:24.041 }' 00:15:24.041 14:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.041 14:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:24.041 14:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.041 14:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:24.041 14:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:24.041 14:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:24.041 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:24.041 14:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:24.041 14:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:24.041 14:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:24.041 14:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=415 00:15:24.041 14:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:24.041 14:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:24.041 14:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.041 14:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:24.041 14:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:24.041 14:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:15:24.041 14:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.041 14:41:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.041 14:41:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.041 14:41:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.041 14:41:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.041 14:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.041 "name": "raid_bdev1", 00:15:24.041 "uuid": "44a91767-7c69-442e-8103-017c920f2c64", 00:15:24.041 "strip_size_kb": 0, 00:15:24.041 "state": "online", 00:15:24.041 "raid_level": "raid1", 00:15:24.041 "superblock": true, 00:15:24.041 "num_base_bdevs": 2, 00:15:24.041 "num_base_bdevs_discovered": 2, 00:15:24.041 "num_base_bdevs_operational": 2, 00:15:24.041 "process": { 00:15:24.041 "type": "rebuild", 00:15:24.041 "target": "spare", 00:15:24.041 "progress": { 00:15:24.041 "blocks": 22528, 00:15:24.041 "percent": 35 00:15:24.041 } 00:15:24.041 }, 00:15:24.041 "base_bdevs_list": [ 00:15:24.041 { 00:15:24.041 "name": "spare", 00:15:24.041 "uuid": "99b65669-0d04-5af8-9bb2-1c04ca69253a", 00:15:24.041 "is_configured": true, 00:15:24.041 "data_offset": 2048, 00:15:24.041 "data_size": 63488 00:15:24.041 }, 00:15:24.041 { 00:15:24.041 "name": "BaseBdev2", 00:15:24.042 "uuid": "ebbcb553-369f-54e8-b34a-7adfb52a2639", 00:15:24.042 "is_configured": true, 00:15:24.042 "data_offset": 2048, 00:15:24.042 "data_size": 63488 00:15:24.042 } 00:15:24.042 ] 00:15:24.042 }' 00:15:24.042 14:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.042 14:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:24.042 14:41:23 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.042 14:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:24.042 14:41:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:25.417 14:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:25.417 14:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.417 14:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.417 14:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.417 14:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.417 14:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.417 14:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.417 14:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.417 14:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.417 14:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.417 14:41:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.417 14:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.417 "name": "raid_bdev1", 00:15:25.417 "uuid": "44a91767-7c69-442e-8103-017c920f2c64", 00:15:25.417 "strip_size_kb": 0, 00:15:25.417 "state": "online", 00:15:25.417 "raid_level": "raid1", 00:15:25.417 "superblock": true, 00:15:25.417 "num_base_bdevs": 2, 00:15:25.417 "num_base_bdevs_discovered": 2, 00:15:25.417 "num_base_bdevs_operational": 2, 00:15:25.417 "process": { 
00:15:25.417 "type": "rebuild", 00:15:25.417 "target": "spare", 00:15:25.417 "progress": { 00:15:25.417 "blocks": 47104, 00:15:25.417 "percent": 74 00:15:25.417 } 00:15:25.417 }, 00:15:25.417 "base_bdevs_list": [ 00:15:25.417 { 00:15:25.417 "name": "spare", 00:15:25.417 "uuid": "99b65669-0d04-5af8-9bb2-1c04ca69253a", 00:15:25.417 "is_configured": true, 00:15:25.417 "data_offset": 2048, 00:15:25.417 "data_size": 63488 00:15:25.417 }, 00:15:25.417 { 00:15:25.417 "name": "BaseBdev2", 00:15:25.417 "uuid": "ebbcb553-369f-54e8-b34a-7adfb52a2639", 00:15:25.417 "is_configured": true, 00:15:25.417 "data_offset": 2048, 00:15:25.417 "data_size": 63488 00:15:25.417 } 00:15:25.417 ] 00:15:25.417 }' 00:15:25.417 14:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.417 14:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.417 14:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.417 14:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.417 14:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:25.985 [2024-11-04 14:41:24.897051] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:25.985 [2024-11-04 14:41:24.897164] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:25.985 [2024-11-04 14:41:24.897324] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.243 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:26.243 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:26.243 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.243 
14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:26.243 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:26.243 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.243 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.243 14:41:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.243 14:41:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.243 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.243 14:41:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.243 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.243 "name": "raid_bdev1", 00:15:26.243 "uuid": "44a91767-7c69-442e-8103-017c920f2c64", 00:15:26.243 "strip_size_kb": 0, 00:15:26.243 "state": "online", 00:15:26.243 "raid_level": "raid1", 00:15:26.243 "superblock": true, 00:15:26.243 "num_base_bdevs": 2, 00:15:26.243 "num_base_bdevs_discovered": 2, 00:15:26.243 "num_base_bdevs_operational": 2, 00:15:26.243 "base_bdevs_list": [ 00:15:26.243 { 00:15:26.243 "name": "spare", 00:15:26.243 "uuid": "99b65669-0d04-5af8-9bb2-1c04ca69253a", 00:15:26.243 "is_configured": true, 00:15:26.243 "data_offset": 2048, 00:15:26.243 "data_size": 63488 00:15:26.243 }, 00:15:26.243 { 00:15:26.243 "name": "BaseBdev2", 00:15:26.243 "uuid": "ebbcb553-369f-54e8-b34a-7adfb52a2639", 00:15:26.243 "is_configured": true, 00:15:26.243 "data_offset": 2048, 00:15:26.243 "data_size": 63488 00:15:26.243 } 00:15:26.243 ] 00:15:26.243 }' 00:15:26.243 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.501 14:41:25 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:26.501 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.501 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:26.501 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:26.501 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:26.501 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.501 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:26.501 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:26.501 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.501 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.501 14:41:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.501 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.501 14:41:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.501 14:41:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.501 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.501 "name": "raid_bdev1", 00:15:26.501 "uuid": "44a91767-7c69-442e-8103-017c920f2c64", 00:15:26.501 "strip_size_kb": 0, 00:15:26.501 "state": "online", 00:15:26.501 "raid_level": "raid1", 00:15:26.501 "superblock": true, 00:15:26.501 "num_base_bdevs": 2, 00:15:26.501 "num_base_bdevs_discovered": 2, 00:15:26.501 "num_base_bdevs_operational": 2, 00:15:26.501 "base_bdevs_list": [ 00:15:26.501 { 00:15:26.501 
"name": "spare", 00:15:26.501 "uuid": "99b65669-0d04-5af8-9bb2-1c04ca69253a", 00:15:26.501 "is_configured": true, 00:15:26.501 "data_offset": 2048, 00:15:26.501 "data_size": 63488 00:15:26.501 }, 00:15:26.501 { 00:15:26.501 "name": "BaseBdev2", 00:15:26.501 "uuid": "ebbcb553-369f-54e8-b34a-7adfb52a2639", 00:15:26.501 "is_configured": true, 00:15:26.501 "data_offset": 2048, 00:15:26.501 "data_size": 63488 00:15:26.501 } 00:15:26.501 ] 00:15:26.501 }' 00:15:26.501 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.501 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:26.501 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.501 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:26.501 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:26.501 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:26.501 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.502 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:26.502 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:26.502 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:26.502 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.502 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.502 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.502 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:15:26.502 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.502 14:41:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.502 14:41:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.502 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.502 14:41:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.760 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.760 "name": "raid_bdev1", 00:15:26.760 "uuid": "44a91767-7c69-442e-8103-017c920f2c64", 00:15:26.760 "strip_size_kb": 0, 00:15:26.760 "state": "online", 00:15:26.760 "raid_level": "raid1", 00:15:26.760 "superblock": true, 00:15:26.760 "num_base_bdevs": 2, 00:15:26.760 "num_base_bdevs_discovered": 2, 00:15:26.760 "num_base_bdevs_operational": 2, 00:15:26.760 "base_bdevs_list": [ 00:15:26.760 { 00:15:26.760 "name": "spare", 00:15:26.760 "uuid": "99b65669-0d04-5af8-9bb2-1c04ca69253a", 00:15:26.760 "is_configured": true, 00:15:26.760 "data_offset": 2048, 00:15:26.760 "data_size": 63488 00:15:26.760 }, 00:15:26.760 { 00:15:26.760 "name": "BaseBdev2", 00:15:26.760 "uuid": "ebbcb553-369f-54e8-b34a-7adfb52a2639", 00:15:26.760 "is_configured": true, 00:15:26.760 "data_offset": 2048, 00:15:26.760 "data_size": 63488 00:15:26.760 } 00:15:26.760 ] 00:15:26.760 }' 00:15:26.760 14:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.760 14:41:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.326 14:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:27.326 14:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.326 14:41:26 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:27.326 [2024-11-04 14:41:26.149258] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:27.326 [2024-11-04 14:41:26.149304] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:27.326 [2024-11-04 14:41:26.149400] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:27.326 [2024-11-04 14:41:26.149493] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:27.326 [2024-11-04 14:41:26.149511] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:27.326 14:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.326 14:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.326 14:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:27.326 14:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.326 14:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.326 14:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.326 14:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:27.326 14:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:27.326 14:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:27.326 14:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:27.326 14:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:27.326 14:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:15:27.326 14:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:27.326 14:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:27.326 14:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:27.326 14:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:27.326 14:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:27.326 14:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:27.326 14:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:27.585 /dev/nbd0 00:15:27.585 14:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:27.585 14:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:27.585 14:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:27.585 14:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:15:27.585 14:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:27.585 14:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:27.585 14:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:27.585 14:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:15:27.585 14:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:27.585 14:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:27.585 14:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:27.585 1+0 records in 00:15:27.585 1+0 records out 00:15:27.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360376 s, 11.4 MB/s 00:15:27.585 14:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:27.585 14:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:15:27.585 14:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:27.585 14:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:27.585 14:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:15:27.585 14:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:27.585 14:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:27.585 14:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:27.843 /dev/nbd1 00:15:27.843 14:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:27.843 14:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:27.843 14:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:27.843 14:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:15:27.843 14:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:27.843 14:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:27.843 14:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:27.843 14:41:26 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:15:27.843 14:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:27.843 14:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:27.843 14:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:27.843 1+0 records in 00:15:27.843 1+0 records out 00:15:27.843 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375288 s, 10.9 MB/s 00:15:27.843 14:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:27.843 14:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:15:27.843 14:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:27.843 14:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:27.843 14:41:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:15:27.843 14:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:27.843 14:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:27.843 14:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:28.101 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:28.102 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:28.102 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:28.102 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:28.102 
14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:28.102 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:28.102 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:28.360 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:28.360 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:28.360 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:28.360 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:28.360 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:28.360 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:28.360 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:28.360 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:28.360 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:28.360 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:28.618 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:28.618 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:28.618 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:28.618 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:28.618 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:28.618 14:41:27 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:28.618 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:28.618 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:28.618 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:28.618 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:28.618 14:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.618 14:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.618 14:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.618 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:28.618 14:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.618 14:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.618 [2024-11-04 14:41:27.679401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:28.618 [2024-11-04 14:41:27.679480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.618 [2024-11-04 14:41:27.679515] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:28.618 [2024-11-04 14:41:27.679530] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.618 [2024-11-04 14:41:27.682460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.618 [2024-11-04 14:41:27.682509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:28.618 [2024-11-04 14:41:27.682636] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:28.618 [2024-11-04 
14:41:27.682709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:28.618 [2024-11-04 14:41:27.682902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:28.618 spare 00:15:28.618 14:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.618 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:28.618 14:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.618 14:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.877 [2024-11-04 14:41:27.783059] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:28.877 [2024-11-04 14:41:27.783142] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:28.877 [2024-11-04 14:41:27.783564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:15:28.877 [2024-11-04 14:41:27.783827] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:28.877 [2024-11-04 14:41:27.783852] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:28.877 [2024-11-04 14:41:27.784126] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.877 14:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.877 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:28.877 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.877 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.877 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:15:28.877 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:28.877 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:28.877 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.877 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.877 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.877 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.877 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.877 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.877 14:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.877 14:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.877 14:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.877 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.877 "name": "raid_bdev1", 00:15:28.877 "uuid": "44a91767-7c69-442e-8103-017c920f2c64", 00:15:28.877 "strip_size_kb": 0, 00:15:28.877 "state": "online", 00:15:28.877 "raid_level": "raid1", 00:15:28.877 "superblock": true, 00:15:28.877 "num_base_bdevs": 2, 00:15:28.877 "num_base_bdevs_discovered": 2, 00:15:28.877 "num_base_bdevs_operational": 2, 00:15:28.877 "base_bdevs_list": [ 00:15:28.877 { 00:15:28.877 "name": "spare", 00:15:28.877 "uuid": "99b65669-0d04-5af8-9bb2-1c04ca69253a", 00:15:28.877 "is_configured": true, 00:15:28.877 "data_offset": 2048, 00:15:28.877 "data_size": 63488 00:15:28.877 }, 00:15:28.877 { 00:15:28.877 "name": "BaseBdev2", 00:15:28.877 "uuid": 
"ebbcb553-369f-54e8-b34a-7adfb52a2639", 00:15:28.877 "is_configured": true, 00:15:28.877 "data_offset": 2048, 00:15:28.877 "data_size": 63488 00:15:28.877 } 00:15:28.877 ] 00:15:28.877 }' 00:15:28.877 14:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.877 14:41:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.443 14:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:29.443 14:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.443 14:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:29.443 14:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:29.443 14:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.443 14:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.443 14:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.443 14:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.443 14:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.443 14:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.443 14:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.443 "name": "raid_bdev1", 00:15:29.443 "uuid": "44a91767-7c69-442e-8103-017c920f2c64", 00:15:29.443 "strip_size_kb": 0, 00:15:29.443 "state": "online", 00:15:29.443 "raid_level": "raid1", 00:15:29.443 "superblock": true, 00:15:29.443 "num_base_bdevs": 2, 00:15:29.443 "num_base_bdevs_discovered": 2, 00:15:29.443 "num_base_bdevs_operational": 2, 00:15:29.443 "base_bdevs_list": [ 00:15:29.443 { 
00:15:29.443 "name": "spare", 00:15:29.443 "uuid": "99b65669-0d04-5af8-9bb2-1c04ca69253a", 00:15:29.443 "is_configured": true, 00:15:29.443 "data_offset": 2048, 00:15:29.443 "data_size": 63488 00:15:29.443 }, 00:15:29.443 { 00:15:29.443 "name": "BaseBdev2", 00:15:29.443 "uuid": "ebbcb553-369f-54e8-b34a-7adfb52a2639", 00:15:29.443 "is_configured": true, 00:15:29.443 "data_offset": 2048, 00:15:29.443 "data_size": 63488 00:15:29.443 } 00:15:29.443 ] 00:15:29.443 }' 00:15:29.443 14:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.443 14:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:29.443 14:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.443 14:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:29.443 14:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.443 14:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.443 14:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.443 14:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:29.443 14:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.443 14:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:29.443 14:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:29.443 14:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.443 14:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.443 [2024-11-04 14:41:28.512328] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:15:29.443 14:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.444 14:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:29.444 14:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.444 14:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.444 14:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.444 14:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.444 14:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:29.444 14:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.444 14:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.444 14:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.444 14:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.444 14:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.444 14:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.444 14:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.444 14:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.444 14:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.444 14:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.444 "name": "raid_bdev1", 00:15:29.444 "uuid": "44a91767-7c69-442e-8103-017c920f2c64", 00:15:29.444 "strip_size_kb": 0, 00:15:29.444 
"state": "online", 00:15:29.444 "raid_level": "raid1", 00:15:29.444 "superblock": true, 00:15:29.444 "num_base_bdevs": 2, 00:15:29.444 "num_base_bdevs_discovered": 1, 00:15:29.444 "num_base_bdevs_operational": 1, 00:15:29.444 "base_bdevs_list": [ 00:15:29.444 { 00:15:29.444 "name": null, 00:15:29.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.444 "is_configured": false, 00:15:29.444 "data_offset": 0, 00:15:29.444 "data_size": 63488 00:15:29.444 }, 00:15:29.444 { 00:15:29.444 "name": "BaseBdev2", 00:15:29.444 "uuid": "ebbcb553-369f-54e8-b34a-7adfb52a2639", 00:15:29.444 "is_configured": true, 00:15:29.444 "data_offset": 2048, 00:15:29.444 "data_size": 63488 00:15:29.444 } 00:15:29.444 ] 00:15:29.444 }' 00:15:29.444 14:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.702 14:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.960 14:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:29.960 14:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.960 14:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.960 [2024-11-04 14:41:28.996453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:29.960 [2024-11-04 14:41:28.996689] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:29.960 [2024-11-04 14:41:28.996715] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:29.960 [2024-11-04 14:41:28.996766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:29.960 [2024-11-04 14:41:29.012065] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:15:29.960 14:41:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.960 14:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:29.960 [2024-11-04 14:41:29.014614] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.335 "name": "raid_bdev1", 00:15:31.335 "uuid": "44a91767-7c69-442e-8103-017c920f2c64", 00:15:31.335 "strip_size_kb": 0, 00:15:31.335 "state": "online", 00:15:31.335 "raid_level": "raid1", 
00:15:31.335 "superblock": true, 00:15:31.335 "num_base_bdevs": 2, 00:15:31.335 "num_base_bdevs_discovered": 2, 00:15:31.335 "num_base_bdevs_operational": 2, 00:15:31.335 "process": { 00:15:31.335 "type": "rebuild", 00:15:31.335 "target": "spare", 00:15:31.335 "progress": { 00:15:31.335 "blocks": 20480, 00:15:31.335 "percent": 32 00:15:31.335 } 00:15:31.335 }, 00:15:31.335 "base_bdevs_list": [ 00:15:31.335 { 00:15:31.335 "name": "spare", 00:15:31.335 "uuid": "99b65669-0d04-5af8-9bb2-1c04ca69253a", 00:15:31.335 "is_configured": true, 00:15:31.335 "data_offset": 2048, 00:15:31.335 "data_size": 63488 00:15:31.335 }, 00:15:31.335 { 00:15:31.335 "name": "BaseBdev2", 00:15:31.335 "uuid": "ebbcb553-369f-54e8-b34a-7adfb52a2639", 00:15:31.335 "is_configured": true, 00:15:31.335 "data_offset": 2048, 00:15:31.335 "data_size": 63488 00:15:31.335 } 00:15:31.335 ] 00:15:31.335 }' 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.335 [2024-11-04 14:41:30.208190] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:31.335 [2024-11-04 14:41:30.223266] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:31.335 [2024-11-04 14:41:30.223384] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:15:31.335 [2024-11-04 14:41:30.223408] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:31.335 [2024-11-04 14:41:30.223424] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.335 "name": "raid_bdev1", 00:15:31.335 "uuid": "44a91767-7c69-442e-8103-017c920f2c64", 00:15:31.335 "strip_size_kb": 0, 00:15:31.335 "state": "online", 00:15:31.335 "raid_level": "raid1", 00:15:31.335 "superblock": true, 00:15:31.335 "num_base_bdevs": 2, 00:15:31.335 "num_base_bdevs_discovered": 1, 00:15:31.335 "num_base_bdevs_operational": 1, 00:15:31.335 "base_bdevs_list": [ 00:15:31.335 { 00:15:31.335 "name": null, 00:15:31.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.335 "is_configured": false, 00:15:31.335 "data_offset": 0, 00:15:31.335 "data_size": 63488 00:15:31.335 }, 00:15:31.335 { 00:15:31.335 "name": "BaseBdev2", 00:15:31.335 "uuid": "ebbcb553-369f-54e8-b34a-7adfb52a2639", 00:15:31.335 "is_configured": true, 00:15:31.335 "data_offset": 2048, 00:15:31.335 "data_size": 63488 00:15:31.335 } 00:15:31.335 ] 00:15:31.335 }' 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.335 14:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.901 14:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:31.901 14:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.901 14:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.901 [2024-11-04 14:41:30.787234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:31.901 [2024-11-04 14:41:30.787326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.901 [2024-11-04 14:41:30.787359] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:31.901 [2024-11-04 14:41:30.787376] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.901 [2024-11-04 14:41:30.787992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.901 [2024-11-04 14:41:30.788037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:31.901 [2024-11-04 14:41:30.788157] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:31.901 [2024-11-04 14:41:30.788181] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:31.901 [2024-11-04 14:41:30.788195] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:31.901 [2024-11-04 14:41:30.788235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:31.901 [2024-11-04 14:41:30.803663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:15:31.901 spare 00:15:31.901 14:41:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.901 14:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:31.901 [2024-11-04 14:41:30.806218] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:32.844 14:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.844 14:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.844 14:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.844 14:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:32.844 14:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.844 14:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:32.844 14:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.844 14:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.844 14:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.844 14:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.844 14:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.844 "name": "raid_bdev1", 00:15:32.845 "uuid": "44a91767-7c69-442e-8103-017c920f2c64", 00:15:32.845 "strip_size_kb": 0, 00:15:32.845 "state": "online", 00:15:32.845 "raid_level": "raid1", 00:15:32.845 "superblock": true, 00:15:32.845 "num_base_bdevs": 2, 00:15:32.845 "num_base_bdevs_discovered": 2, 00:15:32.845 "num_base_bdevs_operational": 2, 00:15:32.845 "process": { 00:15:32.845 "type": "rebuild", 00:15:32.845 "target": "spare", 00:15:32.845 "progress": { 00:15:32.845 "blocks": 20480, 00:15:32.845 "percent": 32 00:15:32.845 } 00:15:32.845 }, 00:15:32.845 "base_bdevs_list": [ 00:15:32.845 { 00:15:32.845 "name": "spare", 00:15:32.845 "uuid": "99b65669-0d04-5af8-9bb2-1c04ca69253a", 00:15:32.845 "is_configured": true, 00:15:32.845 "data_offset": 2048, 00:15:32.845 "data_size": 63488 00:15:32.845 }, 00:15:32.845 { 00:15:32.845 "name": "BaseBdev2", 00:15:32.845 "uuid": "ebbcb553-369f-54e8-b34a-7adfb52a2639", 00:15:32.845 "is_configured": true, 00:15:32.845 "data_offset": 2048, 00:15:32.845 "data_size": 63488 00:15:32.845 } 00:15:32.845 ] 00:15:32.845 }' 00:15:32.845 14:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.845 14:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:32.845 14:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.845 
14:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:32.845 14:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:32.845 14:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.845 14:41:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.103 [2024-11-04 14:41:31.967778] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:33.103 [2024-11-04 14:41:32.015005] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:33.103 [2024-11-04 14:41:32.015117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.103 [2024-11-04 14:41:32.015146] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:33.103 [2024-11-04 14:41:32.015159] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:33.103 14:41:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.104 14:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:33.104 14:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.104 14:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.104 14:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:33.104 14:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:33.104 14:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:33.104 14:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.104 14:41:32 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.104 14:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.104 14:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.104 14:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.104 14:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.104 14:41:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.104 14:41:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.104 14:41:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.104 14:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.104 "name": "raid_bdev1", 00:15:33.104 "uuid": "44a91767-7c69-442e-8103-017c920f2c64", 00:15:33.104 "strip_size_kb": 0, 00:15:33.104 "state": "online", 00:15:33.104 "raid_level": "raid1", 00:15:33.104 "superblock": true, 00:15:33.104 "num_base_bdevs": 2, 00:15:33.104 "num_base_bdevs_discovered": 1, 00:15:33.104 "num_base_bdevs_operational": 1, 00:15:33.104 "base_bdevs_list": [ 00:15:33.104 { 00:15:33.104 "name": null, 00:15:33.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.104 "is_configured": false, 00:15:33.104 "data_offset": 0, 00:15:33.104 "data_size": 63488 00:15:33.104 }, 00:15:33.104 { 00:15:33.104 "name": "BaseBdev2", 00:15:33.104 "uuid": "ebbcb553-369f-54e8-b34a-7adfb52a2639", 00:15:33.104 "is_configured": true, 00:15:33.104 "data_offset": 2048, 00:15:33.104 "data_size": 63488 00:15:33.104 } 00:15:33.104 ] 00:15:33.104 }' 00:15:33.104 14:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.104 14:41:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.670 14:41:32 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:33.670 14:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:33.670 14:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:33.670 14:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:33.670 14:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.670 14:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.670 14:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.670 14:41:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.670 14:41:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.670 14:41:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.670 14:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:33.670 "name": "raid_bdev1", 00:15:33.670 "uuid": "44a91767-7c69-442e-8103-017c920f2c64", 00:15:33.670 "strip_size_kb": 0, 00:15:33.670 "state": "online", 00:15:33.670 "raid_level": "raid1", 00:15:33.670 "superblock": true, 00:15:33.670 "num_base_bdevs": 2, 00:15:33.670 "num_base_bdevs_discovered": 1, 00:15:33.670 "num_base_bdevs_operational": 1, 00:15:33.670 "base_bdevs_list": [ 00:15:33.670 { 00:15:33.670 "name": null, 00:15:33.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.670 "is_configured": false, 00:15:33.670 "data_offset": 0, 00:15:33.670 "data_size": 63488 00:15:33.670 }, 00:15:33.670 { 00:15:33.670 "name": "BaseBdev2", 00:15:33.670 "uuid": "ebbcb553-369f-54e8-b34a-7adfb52a2639", 00:15:33.670 "is_configured": true, 00:15:33.670 "data_offset": 2048, 00:15:33.670 "data_size": 
63488 00:15:33.670 } 00:15:33.670 ] 00:15:33.670 }' 00:15:33.670 14:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:33.670 14:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:33.670 14:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:33.670 14:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:33.670 14:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:33.670 14:41:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.670 14:41:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.670 14:41:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.670 14:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:33.670 14:41:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.670 14:41:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.670 [2024-11-04 14:41:32.714828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:33.670 [2024-11-04 14:41:32.714911] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.670 [2024-11-04 14:41:32.714963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:33.670 [2024-11-04 14:41:32.714990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.670 [2024-11-04 14:41:32.715536] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.670 [2024-11-04 14:41:32.715568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:15:33.670 [2024-11-04 14:41:32.715676] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:33.670 [2024-11-04 14:41:32.715708] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:33.670 [2024-11-04 14:41:32.715721] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:33.670 [2024-11-04 14:41:32.715734] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:33.670 BaseBdev1 00:15:33.670 14:41:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.670 14:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:34.605 14:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:34.605 14:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.605 14:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.605 14:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:34.605 14:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:34.605 14:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:34.605 14:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.605 14:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.605 14:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.605 14:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.863 14:41:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.863 14:41:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.863 14:41:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.863 14:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.863 14:41:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.863 14:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.863 "name": "raid_bdev1", 00:15:34.863 "uuid": "44a91767-7c69-442e-8103-017c920f2c64", 00:15:34.863 "strip_size_kb": 0, 00:15:34.863 "state": "online", 00:15:34.863 "raid_level": "raid1", 00:15:34.863 "superblock": true, 00:15:34.863 "num_base_bdevs": 2, 00:15:34.863 "num_base_bdevs_discovered": 1, 00:15:34.863 "num_base_bdevs_operational": 1, 00:15:34.863 "base_bdevs_list": [ 00:15:34.863 { 00:15:34.863 "name": null, 00:15:34.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.863 "is_configured": false, 00:15:34.863 "data_offset": 0, 00:15:34.863 "data_size": 63488 00:15:34.863 }, 00:15:34.863 { 00:15:34.863 "name": "BaseBdev2", 00:15:34.863 "uuid": "ebbcb553-369f-54e8-b34a-7adfb52a2639", 00:15:34.863 "is_configured": true, 00:15:34.863 "data_offset": 2048, 00:15:34.863 "data_size": 63488 00:15:34.864 } 00:15:34.864 ] 00:15:34.864 }' 00:15:34.864 14:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.864 14:41:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.463 14:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:35.463 14:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.463 14:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:15:35.463 14:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:35.463 14:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.463 14:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.463 14:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.463 14:41:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.463 14:41:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.463 14:41:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.463 14:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.463 "name": "raid_bdev1", 00:15:35.463 "uuid": "44a91767-7c69-442e-8103-017c920f2c64", 00:15:35.463 "strip_size_kb": 0, 00:15:35.463 "state": "online", 00:15:35.463 "raid_level": "raid1", 00:15:35.463 "superblock": true, 00:15:35.463 "num_base_bdevs": 2, 00:15:35.463 "num_base_bdevs_discovered": 1, 00:15:35.463 "num_base_bdevs_operational": 1, 00:15:35.463 "base_bdevs_list": [ 00:15:35.463 { 00:15:35.463 "name": null, 00:15:35.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.463 "is_configured": false, 00:15:35.464 "data_offset": 0, 00:15:35.464 "data_size": 63488 00:15:35.464 }, 00:15:35.464 { 00:15:35.464 "name": "BaseBdev2", 00:15:35.464 "uuid": "ebbcb553-369f-54e8-b34a-7adfb52a2639", 00:15:35.464 "is_configured": true, 00:15:35.464 "data_offset": 2048, 00:15:35.464 "data_size": 63488 00:15:35.464 } 00:15:35.464 ] 00:15:35.464 }' 00:15:35.464 14:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.464 14:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:35.464 14:41:34 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.464 14:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:35.464 14:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:35.464 14:41:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:15:35.464 14:41:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:35.464 14:41:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:35.464 14:41:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:35.464 14:41:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:35.464 14:41:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:35.464 14:41:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:35.464 14:41:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.464 14:41:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.464 [2024-11-04 14:41:34.447427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:35.464 [2024-11-04 14:41:34.447620] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:35.464 [2024-11-04 14:41:34.447644] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:35.464 request: 00:15:35.464 { 00:15:35.464 "base_bdev": "BaseBdev1", 00:15:35.464 "raid_bdev": "raid_bdev1", 00:15:35.464 "method": 
"bdev_raid_add_base_bdev", 00:15:35.464 "req_id": 1 00:15:35.464 } 00:15:35.464 Got JSON-RPC error response 00:15:35.464 response: 00:15:35.464 { 00:15:35.464 "code": -22, 00:15:35.464 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:35.464 } 00:15:35.464 14:41:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:35.464 14:41:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:15:35.464 14:41:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:35.464 14:41:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:35.464 14:41:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:35.464 14:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:36.400 14:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:36.400 14:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.400 14:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.400 14:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:36.400 14:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:36.400 14:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:36.400 14:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.400 14:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.400 14:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.400 14:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.400 14:41:35 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.400 14:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.400 14:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.400 14:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.400 14:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.658 14:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.658 "name": "raid_bdev1", 00:15:36.658 "uuid": "44a91767-7c69-442e-8103-017c920f2c64", 00:15:36.658 "strip_size_kb": 0, 00:15:36.658 "state": "online", 00:15:36.658 "raid_level": "raid1", 00:15:36.658 "superblock": true, 00:15:36.658 "num_base_bdevs": 2, 00:15:36.658 "num_base_bdevs_discovered": 1, 00:15:36.658 "num_base_bdevs_operational": 1, 00:15:36.658 "base_bdevs_list": [ 00:15:36.658 { 00:15:36.658 "name": null, 00:15:36.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.658 "is_configured": false, 00:15:36.658 "data_offset": 0, 00:15:36.658 "data_size": 63488 00:15:36.658 }, 00:15:36.658 { 00:15:36.658 "name": "BaseBdev2", 00:15:36.658 "uuid": "ebbcb553-369f-54e8-b34a-7adfb52a2639", 00:15:36.658 "is_configured": true, 00:15:36.658 "data_offset": 2048, 00:15:36.658 "data_size": 63488 00:15:36.658 } 00:15:36.658 ] 00:15:36.658 }' 00:15:36.658 14:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.658 14:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.917 14:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:36.917 14:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.917 14:41:35 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:36.917 14:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:36.917 14:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.917 14:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.917 14:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.917 14:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.917 14:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.917 14:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.917 14:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.917 "name": "raid_bdev1", 00:15:36.917 "uuid": "44a91767-7c69-442e-8103-017c920f2c64", 00:15:36.917 "strip_size_kb": 0, 00:15:36.917 "state": "online", 00:15:36.917 "raid_level": "raid1", 00:15:36.917 "superblock": true, 00:15:36.917 "num_base_bdevs": 2, 00:15:36.917 "num_base_bdevs_discovered": 1, 00:15:36.917 "num_base_bdevs_operational": 1, 00:15:36.917 "base_bdevs_list": [ 00:15:36.917 { 00:15:36.917 "name": null, 00:15:36.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.917 "is_configured": false, 00:15:36.917 "data_offset": 0, 00:15:36.917 "data_size": 63488 00:15:36.917 }, 00:15:36.917 { 00:15:36.917 "name": "BaseBdev2", 00:15:36.917 "uuid": "ebbcb553-369f-54e8-b34a-7adfb52a2639", 00:15:36.917 "is_configured": true, 00:15:36.917 "data_offset": 2048, 00:15:36.917 "data_size": 63488 00:15:36.917 } 00:15:36.917 ] 00:15:36.917 }' 00:15:36.917 14:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.175 14:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:15:37.175 14:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.175 14:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:37.175 14:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75921 00:15:37.175 14:41:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 75921 ']' 00:15:37.175 14:41:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 75921 00:15:37.175 14:41:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:15:37.175 14:41:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:37.175 14:41:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75921 00:15:37.175 killing process with pid 75921 00:15:37.175 Received shutdown signal, test time was about 60.000000 seconds 00:15:37.175 00:15:37.175 Latency(us) 00:15:37.175 [2024-11-04T14:41:36.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:37.175 [2024-11-04T14:41:36.298Z] =================================================================================================================== 00:15:37.175 [2024-11-04T14:41:36.298Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:37.175 14:41:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:37.175 14:41:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:37.175 14:41:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75921' 00:15:37.175 14:41:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 75921 00:15:37.175 [2024-11-04 14:41:36.168265] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:37.175 14:41:36 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 75921 00:15:37.175 [2024-11-04 14:41:36.168430] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.175 [2024-11-04 14:41:36.168495] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:37.175 [2024-11-04 14:41:36.168514] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:37.434 [2024-11-04 14:41:36.437494] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:38.369 14:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:38.369 00:15:38.369 real 0m26.627s 00:15:38.369 user 0m33.045s 00:15:38.369 sys 0m3.850s 00:15:38.369 14:41:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:38.369 ************************************ 00:15:38.369 END TEST raid_rebuild_test_sb 00:15:38.369 14:41:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.369 ************************************ 00:15:38.627 14:41:37 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:15:38.627 14:41:37 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:15:38.627 14:41:37 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:38.627 14:41:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:38.627 ************************************ 00:15:38.627 START TEST raid_rebuild_test_io 00:15:38.627 ************************************ 00:15:38.627 14:41:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false true true 00:15:38.627 14:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:38.627 14:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:15:38.627 14:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:38.627 14:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:38.627 14:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:38.627 14:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:38.627 14:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:38.627 14:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:38.627 14:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:38.627 14:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:38.627 14:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:38.627 14:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:38.627 14:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:38.627 14:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:38.627 14:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:38.627 14:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:38.627 14:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:38.627 14:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:38.628 14:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:38.628 14:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:38.628 14:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:38.628 
14:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:38.628 14:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:38.628 14:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76684 00:15:38.628 14:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76684 00:15:38.628 14:41:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:38.628 14:41:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 76684 ']' 00:15:38.628 14:41:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.628 14:41:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:38.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.628 14:41:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.628 14:41:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:38.628 14:41:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:38.628 [2024-11-04 14:41:37.620307] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:15:38.628 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:38.628 Zero copy mechanism will not be used. 
00:15:38.628 [2024-11-04 14:41:37.620766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76684 ] 00:15:38.886 [2024-11-04 14:41:37.803239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.886 [2024-11-04 14:41:37.930783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.147 [2024-11-04 14:41:38.133459] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:39.147 [2024-11-04 14:41:38.133552] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:39.741 14:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:39.741 14:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:15:39.741 14:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:39.741 14:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:39.741 14:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.741 14:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.741 BaseBdev1_malloc 00:15:39.741 14:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.741 14:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:39.741 14:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.741 14:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.741 [2024-11-04 14:41:38.634581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:15:39.741 [2024-11-04 14:41:38.634707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.741 [2024-11-04 14:41:38.634744] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:39.741 [2024-11-04 14:41:38.634764] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.741 [2024-11-04 14:41:38.637689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.741 [2024-11-04 14:41:38.637893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:39.741 BaseBdev1 00:15:39.741 14:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.741 14:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:39.741 14:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:39.741 14:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.741 14:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.741 BaseBdev2_malloc 00:15:39.741 14:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.741 14:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:39.741 14:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.741 14:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.741 [2024-11-04 14:41:38.690573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:39.741 [2024-11-04 14:41:38.690666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.742 [2024-11-04 14:41:38.690694] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:39.742 [2024-11-04 14:41:38.690714] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.742 [2024-11-04 14:41:38.693446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.742 [2024-11-04 14:41:38.693494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:39.742 BaseBdev2 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.742 spare_malloc 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.742 spare_delay 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.742 [2024-11-04 14:41:38.763111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:15:39.742 [2024-11-04 14:41:38.763187] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.742 [2024-11-04 14:41:38.763220] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:39.742 [2024-11-04 14:41:38.763237] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.742 [2024-11-04 14:41:38.766057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.742 [2024-11-04 14:41:38.766126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:39.742 spare 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.742 [2024-11-04 14:41:38.775235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:39.742 [2024-11-04 14:41:38.777680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:39.742 [2024-11-04 14:41:38.777820] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:39.742 [2024-11-04 14:41:38.777843] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:39.742 [2024-11-04 14:41:38.778233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:39.742 [2024-11-04 14:41:38.778450] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:39.742 [2024-11-04 14:41:38.778475] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:15:39.742 [2024-11-04 14:41:38.778687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.742 
"name": "raid_bdev1", 00:15:39.742 "uuid": "345be091-2ad0-4ff5-8619-98ca1fef8193", 00:15:39.742 "strip_size_kb": 0, 00:15:39.742 "state": "online", 00:15:39.742 "raid_level": "raid1", 00:15:39.742 "superblock": false, 00:15:39.742 "num_base_bdevs": 2, 00:15:39.742 "num_base_bdevs_discovered": 2, 00:15:39.742 "num_base_bdevs_operational": 2, 00:15:39.742 "base_bdevs_list": [ 00:15:39.742 { 00:15:39.742 "name": "BaseBdev1", 00:15:39.742 "uuid": "e2118d9c-a0ae-5dcb-a332-ac4d488701b3", 00:15:39.742 "is_configured": true, 00:15:39.742 "data_offset": 0, 00:15:39.742 "data_size": 65536 00:15:39.742 }, 00:15:39.742 { 00:15:39.742 "name": "BaseBdev2", 00:15:39.742 "uuid": "be99a8df-e50c-5483-8b16-0e1c17beba32", 00:15:39.742 "is_configured": true, 00:15:39.742 "data_offset": 0, 00:15:39.742 "data_size": 65536 00:15:39.742 } 00:15:39.742 ] 00:15:39.742 }' 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.742 14:41:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:40.308 14:41:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:40.308 14:41:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:40.308 14:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.308 14:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:40.308 [2024-11-04 14:41:39.299652] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:40.308 14:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.308 14:41:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:40.309 14:41:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:40.309 14:41:39 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.309 14:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.309 14:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:40.309 14:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.309 14:41:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:40.309 14:41:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:40.309 14:41:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:40.309 14:41:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:40.309 14:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.309 14:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:40.309 [2024-11-04 14:41:39.387314] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:40.309 14:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.309 14:41:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:40.309 14:41:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.309 14:41:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.309 14:41:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:40.309 14:41:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:40.309 14:41:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:40.309 14:41:39 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.309 14:41:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.309 14:41:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.309 14:41:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.309 14:41:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.309 14:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.309 14:41:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.309 14:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:40.309 14:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.567 14:41:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.567 "name": "raid_bdev1", 00:15:40.567 "uuid": "345be091-2ad0-4ff5-8619-98ca1fef8193", 00:15:40.567 "strip_size_kb": 0, 00:15:40.567 "state": "online", 00:15:40.567 "raid_level": "raid1", 00:15:40.567 "superblock": false, 00:15:40.567 "num_base_bdevs": 2, 00:15:40.567 "num_base_bdevs_discovered": 1, 00:15:40.567 "num_base_bdevs_operational": 1, 00:15:40.567 "base_bdevs_list": [ 00:15:40.567 { 00:15:40.567 "name": null, 00:15:40.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.567 "is_configured": false, 00:15:40.567 "data_offset": 0, 00:15:40.567 "data_size": 65536 00:15:40.567 }, 00:15:40.567 { 00:15:40.567 "name": "BaseBdev2", 00:15:40.567 "uuid": "be99a8df-e50c-5483-8b16-0e1c17beba32", 00:15:40.567 "is_configured": true, 00:15:40.567 "data_offset": 0, 00:15:40.567 "data_size": 65536 00:15:40.567 } 00:15:40.567 ] 00:15:40.567 }' 00:15:40.567 14:41:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:15:40.567 14:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:40.567 [2024-11-04 14:41:39.515287] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:40.567 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:40.567 Zero copy mechanism will not be used. 00:15:40.567 Running I/O for 60 seconds... 00:15:40.849 14:41:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:40.849 14:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.849 14:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:40.849 [2024-11-04 14:41:39.892798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:40.849 14:41:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.849 14:41:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:40.849 [2024-11-04 14:41:39.956204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:40.849 [2024-11-04 14:41:39.958695] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:41.124 [2024-11-04 14:41:40.076546] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:41.124 [2024-11-04 14:41:40.077250] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:41.124 [2024-11-04 14:41:40.204497] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:41.124 [2024-11-04 14:41:40.204889] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:41.693 143.00 IOPS, 429.00 MiB/s 
[2024-11-04T14:41:40.816Z] [2024-11-04 14:41:40.550347] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:41.693 [2024-11-04 14:41:40.689213] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:41.952 14:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:41.952 14:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.952 14:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:41.952 14:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:41.952 14:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.952 14:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.952 14:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.952 14:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.952 14:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:41.952 [2024-11-04 14:41:40.938968] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:41.952 [2024-11-04 14:41:40.947137] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:41.952 14:41:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.952 14:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.952 "name": "raid_bdev1", 00:15:41.952 "uuid": "345be091-2ad0-4ff5-8619-98ca1fef8193", 00:15:41.952 
"strip_size_kb": 0, 00:15:41.952 "state": "online", 00:15:41.952 "raid_level": "raid1", 00:15:41.952 "superblock": false, 00:15:41.952 "num_base_bdevs": 2, 00:15:41.952 "num_base_bdevs_discovered": 2, 00:15:41.952 "num_base_bdevs_operational": 2, 00:15:41.952 "process": { 00:15:41.952 "type": "rebuild", 00:15:41.952 "target": "spare", 00:15:41.952 "progress": { 00:15:41.952 "blocks": 14336, 00:15:41.952 "percent": 21 00:15:41.952 } 00:15:41.952 }, 00:15:41.952 "base_bdevs_list": [ 00:15:41.952 { 00:15:41.952 "name": "spare", 00:15:41.952 "uuid": "6dfa2a4b-2916-5cf3-ad1e-9612e9490970", 00:15:41.952 "is_configured": true, 00:15:41.952 "data_offset": 0, 00:15:41.952 "data_size": 65536 00:15:41.952 }, 00:15:41.952 { 00:15:41.952 "name": "BaseBdev2", 00:15:41.952 "uuid": "be99a8df-e50c-5483-8b16-0e1c17beba32", 00:15:41.952 "is_configured": true, 00:15:41.952 "data_offset": 0, 00:15:41.952 "data_size": 65536 00:15:41.952 } 00:15:41.952 ] 00:15:41.952 }' 00:15:41.952 14:41:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.952 14:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:41.952 14:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.211 14:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:42.211 14:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:42.211 14:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.211 14:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.211 [2024-11-04 14:41:41.088388] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:42.211 [2024-11-04 14:41:41.151498] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 
offset_end: 18432 00:15:42.211 [2024-11-04 14:41:41.151909] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:42.211 [2024-11-04 14:41:41.261874] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:42.211 [2024-11-04 14:41:41.272439] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.211 [2024-11-04 14:41:41.272512] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:42.211 [2024-11-04 14:41:41.272535] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:42.211 [2024-11-04 14:41:41.300212] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:15:42.211 14:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.211 14:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:42.211 14:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.211 14:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.211 14:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:42.211 14:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:42.211 14:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:42.211 14:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.211 14:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.211 14:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.211 14:41:41 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.211 14:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.211 14:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.211 14:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.211 14:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.472 14:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.472 14:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.472 "name": "raid_bdev1", 00:15:42.472 "uuid": "345be091-2ad0-4ff5-8619-98ca1fef8193", 00:15:42.472 "strip_size_kb": 0, 00:15:42.472 "state": "online", 00:15:42.472 "raid_level": "raid1", 00:15:42.472 "superblock": false, 00:15:42.472 "num_base_bdevs": 2, 00:15:42.472 "num_base_bdevs_discovered": 1, 00:15:42.472 "num_base_bdevs_operational": 1, 00:15:42.472 "base_bdevs_list": [ 00:15:42.472 { 00:15:42.472 "name": null, 00:15:42.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.472 "is_configured": false, 00:15:42.472 "data_offset": 0, 00:15:42.472 "data_size": 65536 00:15:42.472 }, 00:15:42.472 { 00:15:42.472 "name": "BaseBdev2", 00:15:42.472 "uuid": "be99a8df-e50c-5483-8b16-0e1c17beba32", 00:15:42.472 "is_configured": true, 00:15:42.472 "data_offset": 0, 00:15:42.472 "data_size": 65536 00:15:42.472 } 00:15:42.472 ] 00:15:42.472 }' 00:15:42.472 14:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.472 14:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.732 128.50 IOPS, 385.50 MiB/s [2024-11-04T14:41:41.855Z] 14:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:42.732 14:41:41 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.732 14:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:42.732 14:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:42.732 14:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.732 14:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.732 14:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.732 14:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.732 14:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.991 14:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.991 14:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.991 "name": "raid_bdev1", 00:15:42.991 "uuid": "345be091-2ad0-4ff5-8619-98ca1fef8193", 00:15:42.991 "strip_size_kb": 0, 00:15:42.991 "state": "online", 00:15:42.991 "raid_level": "raid1", 00:15:42.991 "superblock": false, 00:15:42.991 "num_base_bdevs": 2, 00:15:42.991 "num_base_bdevs_discovered": 1, 00:15:42.991 "num_base_bdevs_operational": 1, 00:15:42.991 "base_bdevs_list": [ 00:15:42.991 { 00:15:42.991 "name": null, 00:15:42.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.991 "is_configured": false, 00:15:42.991 "data_offset": 0, 00:15:42.991 "data_size": 65536 00:15:42.991 }, 00:15:42.991 { 00:15:42.991 "name": "BaseBdev2", 00:15:42.991 "uuid": "be99a8df-e50c-5483-8b16-0e1c17beba32", 00:15:42.991 "is_configured": true, 00:15:42.991 "data_offset": 0, 00:15:42.991 "data_size": 65536 00:15:42.991 } 00:15:42.991 ] 00:15:42.991 }' 00:15:42.991 14:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- 
# jq -r '.process.type // "none"' 00:15:42.991 14:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:42.991 14:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.991 14:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:42.991 14:41:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:42.991 14:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.991 14:41:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:42.991 [2024-11-04 14:41:41.996457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:42.991 14:41:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.991 14:41:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:42.991 [2024-11-04 14:41:42.036475] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:42.991 [2024-11-04 14:41:42.039049] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:43.250 [2024-11-04 14:41:42.165381] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:43.250 [2024-11-04 14:41:42.166121] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:43.508 [2024-11-04 14:41:42.422395] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:43.508 [2024-11-04 14:41:42.422786] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:43.767 139.00 IOPS, 417.00 MiB/s [2024-11-04T14:41:42.890Z] [2024-11-04 
14:41:42.754742] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:43.767 [2024-11-04 14:41:42.874969] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:43.767 [2024-11-04 14:41:42.875332] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:44.026 14:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.026 14:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.026 14:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.026 14:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.026 14:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.026 14:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.026 14:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.026 14:41:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.026 14:41:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.026 14:41:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.026 14:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.026 "name": "raid_bdev1", 00:15:44.026 "uuid": "345be091-2ad0-4ff5-8619-98ca1fef8193", 00:15:44.026 "strip_size_kb": 0, 00:15:44.026 "state": "online", 00:15:44.026 "raid_level": "raid1", 00:15:44.026 "superblock": false, 00:15:44.026 "num_base_bdevs": 2, 00:15:44.026 "num_base_bdevs_discovered": 2, 00:15:44.026 
"num_base_bdevs_operational": 2, 00:15:44.026 "process": { 00:15:44.026 "type": "rebuild", 00:15:44.026 "target": "spare", 00:15:44.026 "progress": { 00:15:44.026 "blocks": 12288, 00:15:44.026 "percent": 18 00:15:44.026 } 00:15:44.026 }, 00:15:44.026 "base_bdevs_list": [ 00:15:44.026 { 00:15:44.026 "name": "spare", 00:15:44.026 "uuid": "6dfa2a4b-2916-5cf3-ad1e-9612e9490970", 00:15:44.026 "is_configured": true, 00:15:44.026 "data_offset": 0, 00:15:44.026 "data_size": 65536 00:15:44.026 }, 00:15:44.026 { 00:15:44.026 "name": "BaseBdev2", 00:15:44.026 "uuid": "be99a8df-e50c-5483-8b16-0e1c17beba32", 00:15:44.026 "is_configured": true, 00:15:44.026 "data_offset": 0, 00:15:44.026 "data_size": 65536 00:15:44.026 } 00:15:44.026 ] 00:15:44.026 }' 00:15:44.026 14:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.026 [2024-11-04 14:41:43.130840] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:44.026 14:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:44.026 14:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.286 14:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.286 14:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:44.286 14:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:44.286 14:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:44.286 14:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:44.286 14:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=436 00:15:44.286 14:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout 
)) 00:15:44.286 14:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.286 14:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.286 14:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.286 14:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.286 14:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.286 14:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.286 14:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.286 14:41:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.286 14:41:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:44.286 14:41:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.286 14:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.286 "name": "raid_bdev1", 00:15:44.286 "uuid": "345be091-2ad0-4ff5-8619-98ca1fef8193", 00:15:44.286 "strip_size_kb": 0, 00:15:44.286 "state": "online", 00:15:44.286 "raid_level": "raid1", 00:15:44.286 "superblock": false, 00:15:44.286 "num_base_bdevs": 2, 00:15:44.286 "num_base_bdevs_discovered": 2, 00:15:44.286 "num_base_bdevs_operational": 2, 00:15:44.286 "process": { 00:15:44.286 "type": "rebuild", 00:15:44.286 "target": "spare", 00:15:44.286 "progress": { 00:15:44.286 "blocks": 14336, 00:15:44.286 "percent": 21 00:15:44.286 } 00:15:44.286 }, 00:15:44.286 "base_bdevs_list": [ 00:15:44.286 { 00:15:44.286 "name": "spare", 00:15:44.286 "uuid": "6dfa2a4b-2916-5cf3-ad1e-9612e9490970", 00:15:44.286 "is_configured": true, 00:15:44.286 "data_offset": 0, 00:15:44.286 
"data_size": 65536 00:15:44.286 }, 00:15:44.286 { 00:15:44.286 "name": "BaseBdev2", 00:15:44.286 "uuid": "be99a8df-e50c-5483-8b16-0e1c17beba32", 00:15:44.286 "is_configured": true, 00:15:44.286 "data_offset": 0, 00:15:44.286 "data_size": 65536 00:15:44.286 } 00:15:44.286 ] 00:15:44.286 }' 00:15:44.286 14:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.286 [2024-11-04 14:41:43.249777] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:44.286 14:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:44.286 14:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.286 14:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.286 14:41:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:45.113 124.00 IOPS, 372.00 MiB/s [2024-11-04T14:41:44.236Z] [2024-11-04 14:41:43.944778] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:45.372 [2024-11-04 14:41:44.277774] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:15:45.372 14:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:45.372 14:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.372 14:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.372 14:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:45.372 14:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.372 14:41:44 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.372 14:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.372 14:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.372 14:41:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.372 14:41:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:45.372 14:41:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.372 14:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.372 "name": "raid_bdev1", 00:15:45.372 "uuid": "345be091-2ad0-4ff5-8619-98ca1fef8193", 00:15:45.372 "strip_size_kb": 0, 00:15:45.372 "state": "online", 00:15:45.372 "raid_level": "raid1", 00:15:45.372 "superblock": false, 00:15:45.372 "num_base_bdevs": 2, 00:15:45.372 "num_base_bdevs_discovered": 2, 00:15:45.372 "num_base_bdevs_operational": 2, 00:15:45.372 "process": { 00:15:45.372 "type": "rebuild", 00:15:45.372 "target": "spare", 00:15:45.372 "progress": { 00:15:45.372 "blocks": 32768, 00:15:45.372 "percent": 50 00:15:45.372 } 00:15:45.372 }, 00:15:45.372 "base_bdevs_list": [ 00:15:45.372 { 00:15:45.372 "name": "spare", 00:15:45.372 "uuid": "6dfa2a4b-2916-5cf3-ad1e-9612e9490970", 00:15:45.372 "is_configured": true, 00:15:45.372 "data_offset": 0, 00:15:45.372 "data_size": 65536 00:15:45.372 }, 00:15:45.372 { 00:15:45.372 "name": "BaseBdev2", 00:15:45.372 "uuid": "be99a8df-e50c-5483-8b16-0e1c17beba32", 00:15:45.372 "is_configured": true, 00:15:45.372 "data_offset": 0, 00:15:45.372 "data_size": 65536 00:15:45.372 } 00:15:45.372 ] 00:15:45.372 }' 00:15:45.372 14:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.372 [2024-11-04 14:41:44.413454] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:45.372 [2024-11-04 14:41:44.413812] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:45.372 14:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:45.372 14:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.630 14:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:45.630 14:41:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:45.889 109.80 IOPS, 329.40 MiB/s [2024-11-04T14:41:45.012Z] [2024-11-04 14:41:44.856358] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:46.147 [2024-11-04 14:41:45.068998] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:15:46.405 [2024-11-04 14:41:45.288659] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:46.405 14:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:46.405 14:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:46.405 14:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.405 14:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:46.405 14:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:46.405 14:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.405 14:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.405 14:41:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.405 14:41:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:46.405 14:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.664 14:41:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.664 99.50 IOPS, 298.50 MiB/s [2024-11-04T14:41:45.787Z] 14:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.664 "name": "raid_bdev1", 00:15:46.664 "uuid": "345be091-2ad0-4ff5-8619-98ca1fef8193", 00:15:46.664 "strip_size_kb": 0, 00:15:46.664 "state": "online", 00:15:46.664 "raid_level": "raid1", 00:15:46.664 "superblock": false, 00:15:46.664 "num_base_bdevs": 2, 00:15:46.664 "num_base_bdevs_discovered": 2, 00:15:46.664 "num_base_bdevs_operational": 2, 00:15:46.664 "process": { 00:15:46.664 "type": "rebuild", 00:15:46.664 "target": "spare", 00:15:46.664 "progress": { 00:15:46.664 "blocks": 49152, 00:15:46.664 "percent": 75 00:15:46.664 } 00:15:46.664 }, 00:15:46.664 "base_bdevs_list": [ 00:15:46.664 { 00:15:46.664 "name": "spare", 00:15:46.664 "uuid": "6dfa2a4b-2916-5cf3-ad1e-9612e9490970", 00:15:46.664 "is_configured": true, 00:15:46.664 "data_offset": 0, 00:15:46.664 "data_size": 65536 00:15:46.664 }, 00:15:46.664 { 00:15:46.664 "name": "BaseBdev2", 00:15:46.664 "uuid": "be99a8df-e50c-5483-8b16-0e1c17beba32", 00:15:46.664 "is_configured": true, 00:15:46.664 "data_offset": 0, 00:15:46.664 "data_size": 65536 00:15:46.664 } 00:15:46.664 ] 00:15:46.664 }' 00:15:46.664 14:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.664 14:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:46.664 14:41:45 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.664 14:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:46.664 14:41:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:46.923 [2024-11-04 14:41:45.961095] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:15:47.182 [2024-11-04 14:41:46.191479] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:15:47.440 [2024-11-04 14:41:46.530539] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:47.745 90.00 IOPS, 270.00 MiB/s [2024-11-04T14:41:46.868Z] [2024-11-04 14:41:46.630581] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:47.745 [2024-11-04 14:41:46.633102] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.745 14:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:47.745 14:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:47.745 14:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.745 14:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:47.745 14:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:47.745 14:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.745 14:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.746 14:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.746 14:41:46 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.746 14:41:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:47.746 14:41:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.746 14:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.746 "name": "raid_bdev1", 00:15:47.746 "uuid": "345be091-2ad0-4ff5-8619-98ca1fef8193", 00:15:47.746 "strip_size_kb": 0, 00:15:47.746 "state": "online", 00:15:47.746 "raid_level": "raid1", 00:15:47.746 "superblock": false, 00:15:47.746 "num_base_bdevs": 2, 00:15:47.746 "num_base_bdevs_discovered": 2, 00:15:47.746 "num_base_bdevs_operational": 2, 00:15:47.746 "base_bdevs_list": [ 00:15:47.746 { 00:15:47.746 "name": "spare", 00:15:47.746 "uuid": "6dfa2a4b-2916-5cf3-ad1e-9612e9490970", 00:15:47.746 "is_configured": true, 00:15:47.746 "data_offset": 0, 00:15:47.746 "data_size": 65536 00:15:47.746 }, 00:15:47.746 { 00:15:47.746 "name": "BaseBdev2", 00:15:47.746 "uuid": "be99a8df-e50c-5483-8b16-0e1c17beba32", 00:15:47.746 "is_configured": true, 00:15:47.746 "data_offset": 0, 00:15:47.746 "data_size": 65536 00:15:47.746 } 00:15:47.746 ] 00:15:47.746 }' 00:15:47.746 14:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.746 14:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:47.746 14:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.746 14:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:47.746 14:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:15:47.746 14:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:47.746 14:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:47.746 14:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:47.746 14:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:47.746 14:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.746 14:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.746 14:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.746 14:41:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.746 14:41:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.012 14:41:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.012 14:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.012 "name": "raid_bdev1", 00:15:48.012 "uuid": "345be091-2ad0-4ff5-8619-98ca1fef8193", 00:15:48.012 "strip_size_kb": 0, 00:15:48.012 "state": "online", 00:15:48.012 "raid_level": "raid1", 00:15:48.012 "superblock": false, 00:15:48.012 "num_base_bdevs": 2, 00:15:48.012 "num_base_bdevs_discovered": 2, 00:15:48.012 "num_base_bdevs_operational": 2, 00:15:48.012 "base_bdevs_list": [ 00:15:48.012 { 00:15:48.012 "name": "spare", 00:15:48.012 "uuid": "6dfa2a4b-2916-5cf3-ad1e-9612e9490970", 00:15:48.012 "is_configured": true, 00:15:48.012 "data_offset": 0, 00:15:48.012 "data_size": 65536 00:15:48.012 }, 00:15:48.012 { 00:15:48.012 "name": "BaseBdev2", 00:15:48.012 "uuid": "be99a8df-e50c-5483-8b16-0e1c17beba32", 00:15:48.012 "is_configured": true, 00:15:48.012 "data_offset": 0, 00:15:48.012 "data_size": 65536 00:15:48.012 } 00:15:48.012 ] 00:15:48.012 }' 00:15:48.012 14:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.013 14:41:46 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:48.013 14:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.013 14:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:48.013 14:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:48.013 14:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.013 14:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.013 14:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:48.013 14:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:48.013 14:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:48.013 14:41:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.013 14:41:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.013 14:41:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.013 14:41:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.013 14:41:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.013 14:41:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.013 14:41:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.013 14:41:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.013 14:41:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.013 14:41:47 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.013 "name": "raid_bdev1", 00:15:48.013 "uuid": "345be091-2ad0-4ff5-8619-98ca1fef8193", 00:15:48.013 "strip_size_kb": 0, 00:15:48.013 "state": "online", 00:15:48.013 "raid_level": "raid1", 00:15:48.013 "superblock": false, 00:15:48.013 "num_base_bdevs": 2, 00:15:48.013 "num_base_bdevs_discovered": 2, 00:15:48.013 "num_base_bdevs_operational": 2, 00:15:48.013 "base_bdevs_list": [ 00:15:48.013 { 00:15:48.013 "name": "spare", 00:15:48.013 "uuid": "6dfa2a4b-2916-5cf3-ad1e-9612e9490970", 00:15:48.013 "is_configured": true, 00:15:48.013 "data_offset": 0, 00:15:48.013 "data_size": 65536 00:15:48.013 }, 00:15:48.013 { 00:15:48.013 "name": "BaseBdev2", 00:15:48.013 "uuid": "be99a8df-e50c-5483-8b16-0e1c17beba32", 00:15:48.013 "is_configured": true, 00:15:48.013 "data_offset": 0, 00:15:48.013 "data_size": 65536 00:15:48.013 } 00:15:48.013 ] 00:15:48.013 }' 00:15:48.013 14:41:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.013 14:41:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.580 14:41:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:48.580 14:41:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.580 14:41:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.580 [2024-11-04 14:41:47.546808] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:48.580 [2024-11-04 14:41:47.547038] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:48.580 82.50 IOPS, 247.50 MiB/s 00:15:48.580 Latency(us) 00:15:48.580 [2024-11-04T14:41:47.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:48.580 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 
00:15:48.580 raid_bdev1 : 8.13 81.44 244.32 0.00 0.00 15152.01 286.72 120586.24 00:15:48.580 [2024-11-04T14:41:47.703Z] =================================================================================================================== 00:15:48.580 [2024-11-04T14:41:47.703Z] Total : 81.44 244.32 0.00 0.00 15152.01 286.72 120586.24 00:15:48.580 [2024-11-04 14:41:47.666819] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.580 [2024-11-04 14:41:47.666881] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:48.580 [2024-11-04 14:41:47.667019] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:48.580 [2024-11-04 14:41:47.667038] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:48.580 { 00:15:48.580 "results": [ 00:15:48.580 { 00:15:48.580 "job": "raid_bdev1", 00:15:48.580 "core_mask": "0x1", 00:15:48.580 "workload": "randrw", 00:15:48.580 "percentage": 50, 00:15:48.580 "status": "finished", 00:15:48.580 "queue_depth": 2, 00:15:48.580 "io_size": 3145728, 00:15:48.580 "runtime": 8.128762, 00:15:48.580 "iops": 81.43921546725073, 00:15:48.580 "mibps": 244.3176464017522, 00:15:48.580 "io_failed": 0, 00:15:48.580 "io_timeout": 0, 00:15:48.580 "avg_latency_us": 15152.014940950288, 00:15:48.580 "min_latency_us": 286.72, 00:15:48.580 "max_latency_us": 120586.24 00:15:48.580 } 00:15:48.580 ], 00:15:48.580 "core_count": 1 00:15:48.580 } 00:15:48.580 14:41:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.580 14:41:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:48.580 14:41:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.580 14:41:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.580 14:41:47 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.580 14:41:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.839 14:41:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:48.839 14:41:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:48.839 14:41:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:48.839 14:41:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:48.839 14:41:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:48.839 14:41:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:48.839 14:41:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:48.839 14:41:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:48.839 14:41:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:48.839 14:41:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:48.839 14:41:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:48.839 14:41:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:48.839 14:41:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:49.097 /dev/nbd0 00:15:49.097 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:49.097 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:49.097 14:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:49.097 14:41:48 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@871 -- # local i 00:15:49.097 14:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:49.097 14:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:49.097 14:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:49.097 14:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:15:49.097 14:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:49.097 14:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:49.097 14:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:49.097 1+0 records in 00:15:49.097 1+0 records out 00:15:49.097 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527241 s, 7.8 MB/s 00:15:49.097 14:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:49.097 14:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:15:49.097 14:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:49.097 14:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:49.097 14:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:15:49.097 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:49.097 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:49.097 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:49.097 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z 
BaseBdev2 ']' 00:15:49.097 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:15:49.097 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:49.097 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:15:49.097 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:49.097 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:49.097 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:49.097 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:49.097 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:49.097 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:49.098 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:15:49.357 /dev/nbd1 00:15:49.357 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:49.357 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:49.357 14:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:49.357 14:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:15:49.357 14:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:49.357 14:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:49.357 14:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:49.357 14:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- 
# break 00:15:49.357 14:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:49.357 14:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:49.357 14:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:49.357 1+0 records in 00:15:49.357 1+0 records out 00:15:49.357 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432943 s, 9.5 MB/s 00:15:49.357 14:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:49.357 14:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:15:49.357 14:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:49.357 14:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:49.357 14:41:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:15:49.357 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:49.357 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:49.357 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:49.616 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:49.616 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:49.616 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:49.616 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:49.616 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:49.616 
14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:49.616 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:49.890 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:49.890 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:49.890 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:49.890 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:49.890 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:49.890 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:49.890 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:49.890 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:49.890 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:49.890 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:49.890 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:49.890 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:49.890 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:49.890 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:49.890 14:41:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:50.150 14:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 
00:15:50.150 14:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:50.150 14:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:50.150 14:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:50.150 14:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:50.150 14:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:50.150 14:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:50.150 14:41:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:50.150 14:41:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:50.150 14:41:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76684 00:15:50.150 14:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 76684 ']' 00:15:50.150 14:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 76684 00:15:50.150 14:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:15:50.150 14:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:50.150 14:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76684 00:15:50.150 killing process with pid 76684 00:15:50.150 Received shutdown signal, test time was about 9.695672 seconds 00:15:50.150 00:15:50.150 Latency(us) 00:15:50.150 [2024-11-04T14:41:49.273Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.150 [2024-11-04T14:41:49.273Z] =================================================================================================================== 00:15:50.150 [2024-11-04T14:41:49.273Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:50.150 14:41:49 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:50.150 14:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:50.150 14:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76684' 00:15:50.150 14:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 76684 00:15:50.150 [2024-11-04 14:41:49.213611] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:50.150 14:41:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 76684 00:15:50.409 [2024-11-04 14:41:49.423806] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:51.792 ************************************ 00:15:51.792 END TEST raid_rebuild_test_io 00:15:51.792 ************************************ 00:15:51.792 14:41:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:51.792 00:15:51.792 real 0m12.990s 00:15:51.792 user 0m17.093s 00:15:51.792 sys 0m1.340s 00:15:51.792 14:41:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:51.792 14:41:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:51.792 14:41:50 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:15:51.792 14:41:50 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:15:51.792 14:41:50 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:51.792 14:41:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:51.792 ************************************ 00:15:51.792 START TEST raid_rebuild_test_sb_io 00:15:51.792 ************************************ 00:15:51.792 14:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true true true 00:15:51.792 14:41:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:51.793 14:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:51.793 14:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:51.793 14:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:51.793 14:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:51.793 14:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:51.793 14:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:51.793 14:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:51.793 14:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:51.793 14:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:51.793 14:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:51.793 14:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:51.793 14:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:51.793 14:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:51.793 14:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:51.793 14:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:51.793 14:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:51.793 14:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:51.793 14:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local 
raid_bdev_size 00:15:51.793 14:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:51.793 14:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:51.793 14:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:51.793 14:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:51.793 14:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:51.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.793 14:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77068 00:15:51.793 14:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77068 00:15:51.793 14:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 77068 ']' 00:15:51.793 14:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.793 14:41:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:51.793 14:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:51.793 14:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.793 14:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:51.793 14:41:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:51.793 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:51.793 Zero copy mechanism will not be used. 
00:15:51.793 [2024-11-04 14:41:50.670919] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:15:51.793 [2024-11-04 14:41:50.671119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77068 ] 00:15:51.793 [2024-11-04 14:41:50.868121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.050 [2024-11-04 14:41:50.997323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.307 [2024-11-04 14:41:51.199324] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:52.307 [2024-11-04 14:41:51.199397] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:52.872 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:52.872 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:15:52.872 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:52.872 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:52.872 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.872 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:52.872 BaseBdev1_malloc 00:15:52.872 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.872 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:52.872 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.872 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:52.872 [2024-11-04 14:41:51.776709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:52.872 [2024-11-04 14:41:51.777228] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.872 [2024-11-04 14:41:51.777273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:52.872 [2024-11-04 14:41:51.777294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.872 [2024-11-04 14:41:51.780049] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.872 [2024-11-04 14:41:51.780100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:52.872 BaseBdev1 00:15:52.872 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.872 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:52.872 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:52.872 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.872 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:52.872 BaseBdev2_malloc 00:15:52.872 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.872 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:52.872 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.872 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:52.872 [2024-11-04 14:41:51.827995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:15:52.872 [2024-11-04 14:41:51.828071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.872 [2024-11-04 14:41:51.828097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:52.872 [2024-11-04 14:41:51.828117] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.872 [2024-11-04 14:41:51.830773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.872 [2024-11-04 14:41:51.830823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:52.872 BaseBdev2 00:15:52.872 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.872 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:52.872 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.872 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:52.872 spare_malloc 00:15:52.872 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.872 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:52.872 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.872 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:52.872 spare_delay 00:15:52.872 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.872 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:52.872 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.872 
14:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:52.872 [2024-11-04 14:41:51.901183] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:52.872 [2024-11-04 14:41:51.901258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.873 [2024-11-04 14:41:51.901290] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:52.873 [2024-11-04 14:41:51.901309] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.873 [2024-11-04 14:41:51.904053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.873 [2024-11-04 14:41:51.904102] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:52.873 spare 00:15:52.873 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.873 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:52.873 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.873 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:52.873 [2024-11-04 14:41:51.909277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:52.873 [2024-11-04 14:41:51.911637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:52.873 [2024-11-04 14:41:51.911855] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:52.873 [2024-11-04 14:41:51.911881] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:52.873 [2024-11-04 14:41:51.912230] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:52.873 [2024-11-04 14:41:51.912462] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:52.873 [2024-11-04 14:41:51.912479] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:52.873 [2024-11-04 14:41:51.912659] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.873 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.873 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:52.873 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:52.873 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.873 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:52.873 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:52.873 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:52.873 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.873 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.873 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.873 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.873 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.873 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.873 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.873 14:41:51 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:52.873 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.873 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.873 "name": "raid_bdev1", 00:15:52.873 "uuid": "77b0c814-2cbb-4948-bf93-b3b5f027f21f", 00:15:52.873 "strip_size_kb": 0, 00:15:52.873 "state": "online", 00:15:52.873 "raid_level": "raid1", 00:15:52.873 "superblock": true, 00:15:52.873 "num_base_bdevs": 2, 00:15:52.873 "num_base_bdevs_discovered": 2, 00:15:52.873 "num_base_bdevs_operational": 2, 00:15:52.873 "base_bdevs_list": [ 00:15:52.873 { 00:15:52.873 "name": "BaseBdev1", 00:15:52.873 "uuid": "ed6cb662-9039-5fe9-873e-35a588c68f25", 00:15:52.873 "is_configured": true, 00:15:52.873 "data_offset": 2048, 00:15:52.873 "data_size": 63488 00:15:52.873 }, 00:15:52.873 { 00:15:52.873 "name": "BaseBdev2", 00:15:52.873 "uuid": "e5585f6c-715d-592f-95f0-f99bbf6a688e", 00:15:52.873 "is_configured": true, 00:15:52.873 "data_offset": 2048, 00:15:52.873 "data_size": 63488 00:15:52.873 } 00:15:52.873 ] 00:15:52.873 }' 00:15:52.873 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.873 14:41:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:53.439 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:53.439 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:53.439 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.439 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:53.439 [2024-11-04 14:41:52.461751] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:53.439 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.439 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:53.439 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.439 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:53.439 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.439 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:53.439 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.439 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:53.439 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:53.439 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:53.439 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:53.439 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.439 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:53.697 [2024-11-04 14:41:52.565443] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:53.697 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.697 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:53.697 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:53.697 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:53.697 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:53.697 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:53.697 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:53.697 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.697 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.697 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.697 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.697 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.697 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.697 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.697 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:53.697 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.697 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.697 "name": "raid_bdev1", 00:15:53.697 "uuid": "77b0c814-2cbb-4948-bf93-b3b5f027f21f", 00:15:53.697 "strip_size_kb": 0, 00:15:53.697 "state": "online", 00:15:53.697 "raid_level": "raid1", 00:15:53.697 "superblock": true, 00:15:53.697 "num_base_bdevs": 2, 00:15:53.697 "num_base_bdevs_discovered": 1, 00:15:53.697 "num_base_bdevs_operational": 1, 00:15:53.697 "base_bdevs_list": [ 00:15:53.697 { 00:15:53.697 "name": null, 00:15:53.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.697 "is_configured": false, 00:15:53.697 
"data_offset": 0, 00:15:53.697 "data_size": 63488 00:15:53.697 }, 00:15:53.697 { 00:15:53.697 "name": "BaseBdev2", 00:15:53.697 "uuid": "e5585f6c-715d-592f-95f0-f99bbf6a688e", 00:15:53.697 "is_configured": true, 00:15:53.697 "data_offset": 2048, 00:15:53.697 "data_size": 63488 00:15:53.697 } 00:15:53.697 ] 00:15:53.697 }' 00:15:53.697 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.697 14:41:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:53.697 [2024-11-04 14:41:52.693316] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:53.697 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:53.697 Zero copy mechanism will not be used. 00:15:53.697 Running I/O for 60 seconds... 00:15:54.264 14:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:54.264 14:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.264 14:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.264 [2024-11-04 14:41:53.137197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:54.264 14:41:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.264 14:41:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:54.264 [2024-11-04 14:41:53.206939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:54.264 [2024-11-04 14:41:53.209643] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:54.522 [2024-11-04 14:41:53.462814] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:54.522 [2024-11-04 14:41:53.463442] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:54.780 171.00 IOPS, 513.00 MiB/s [2024-11-04T14:41:53.903Z] [2024-11-04 14:41:53.828338] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:55.039 [2024-11-04 14:41:54.055208] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:55.039 [2024-11-04 14:41:54.055872] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:55.297 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:55.297 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.297 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:55.297 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:55.297 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.297 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.297 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.297 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.297 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.297 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.297 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.297 "name": "raid_bdev1", 00:15:55.297 "uuid": "77b0c814-2cbb-4948-bf93-b3b5f027f21f", 00:15:55.297 "strip_size_kb": 0, 00:15:55.297 
"state": "online", 00:15:55.297 "raid_level": "raid1", 00:15:55.297 "superblock": true, 00:15:55.297 "num_base_bdevs": 2, 00:15:55.297 "num_base_bdevs_discovered": 2, 00:15:55.297 "num_base_bdevs_operational": 2, 00:15:55.297 "process": { 00:15:55.297 "type": "rebuild", 00:15:55.297 "target": "spare", 00:15:55.297 "progress": { 00:15:55.297 "blocks": 10240, 00:15:55.297 "percent": 16 00:15:55.297 } 00:15:55.297 }, 00:15:55.297 "base_bdevs_list": [ 00:15:55.297 { 00:15:55.297 "name": "spare", 00:15:55.297 "uuid": "dba6b175-e79f-5bac-bfa2-f4d4904cd3f9", 00:15:55.297 "is_configured": true, 00:15:55.297 "data_offset": 2048, 00:15:55.297 "data_size": 63488 00:15:55.297 }, 00:15:55.297 { 00:15:55.297 "name": "BaseBdev2", 00:15:55.297 "uuid": "e5585f6c-715d-592f-95f0-f99bbf6a688e", 00:15:55.297 "is_configured": true, 00:15:55.297 "data_offset": 2048, 00:15:55.297 "data_size": 63488 00:15:55.297 } 00:15:55.297 ] 00:15:55.297 }' 00:15:55.297 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.297 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:55.297 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.297 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:55.297 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:55.297 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.297 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.298 [2024-11-04 14:41:54.344301] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:55.298 [2024-11-04 14:41:54.404337] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 
18432 00:15:55.556 [2024-11-04 14:41:54.514153] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:55.557 [2024-11-04 14:41:54.524763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.557 [2024-11-04 14:41:54.524810] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:55.557 [2024-11-04 14:41:54.524831] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:55.557 [2024-11-04 14:41:54.567747] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:15:55.557 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.557 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:55.557 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.557 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.557 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.557 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.557 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:55.557 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.557 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.557 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.557 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.557 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:15:55.557 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.557 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.557 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.557 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.557 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.557 "name": "raid_bdev1", 00:15:55.557 "uuid": "77b0c814-2cbb-4948-bf93-b3b5f027f21f", 00:15:55.557 "strip_size_kb": 0, 00:15:55.557 "state": "online", 00:15:55.557 "raid_level": "raid1", 00:15:55.557 "superblock": true, 00:15:55.557 "num_base_bdevs": 2, 00:15:55.557 "num_base_bdevs_discovered": 1, 00:15:55.557 "num_base_bdevs_operational": 1, 00:15:55.557 "base_bdevs_list": [ 00:15:55.557 { 00:15:55.557 "name": null, 00:15:55.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.557 "is_configured": false, 00:15:55.557 "data_offset": 0, 00:15:55.557 "data_size": 63488 00:15:55.557 }, 00:15:55.557 { 00:15:55.557 "name": "BaseBdev2", 00:15:55.557 "uuid": "e5585f6c-715d-592f-95f0-f99bbf6a688e", 00:15:55.557 "is_configured": true, 00:15:55.557 "data_offset": 2048, 00:15:55.557 "data_size": 63488 00:15:55.557 } 00:15:55.557 ] 00:15:55.557 }' 00:15:55.557 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.557 14:41:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.073 120.00 IOPS, 360.00 MiB/s [2024-11-04T14:41:55.196Z] 14:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:56.073 14:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:56.073 14:41:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:56.073 14:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:56.073 14:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.073 14:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.073 14:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.073 14:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.073 14:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.073 14:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.073 14:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.073 "name": "raid_bdev1", 00:15:56.073 "uuid": "77b0c814-2cbb-4948-bf93-b3b5f027f21f", 00:15:56.073 "strip_size_kb": 0, 00:15:56.073 "state": "online", 00:15:56.073 "raid_level": "raid1", 00:15:56.073 "superblock": true, 00:15:56.073 "num_base_bdevs": 2, 00:15:56.073 "num_base_bdevs_discovered": 1, 00:15:56.073 "num_base_bdevs_operational": 1, 00:15:56.073 "base_bdevs_list": [ 00:15:56.073 { 00:15:56.073 "name": null, 00:15:56.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.073 "is_configured": false, 00:15:56.073 "data_offset": 0, 00:15:56.073 "data_size": 63488 00:15:56.073 }, 00:15:56.073 { 00:15:56.073 "name": "BaseBdev2", 00:15:56.073 "uuid": "e5585f6c-715d-592f-95f0-f99bbf6a688e", 00:15:56.073 "is_configured": true, 00:15:56.073 "data_offset": 2048, 00:15:56.073 "data_size": 63488 00:15:56.073 } 00:15:56.073 ] 00:15:56.073 }' 00:15:56.073 14:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.379 14:41:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:56.379 14:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.379 14:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:56.379 14:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:56.379 14:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.379 14:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:56.379 [2024-11-04 14:41:55.272446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:56.379 14:41:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.379 14:41:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:56.379 [2024-11-04 14:41:55.358530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:56.379 [2024-11-04 14:41:55.361072] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:56.657 [2024-11-04 14:41:55.486173] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:56.657 [2024-11-04 14:41:55.486866] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:56.657 147.67 IOPS, 443.00 MiB/s [2024-11-04T14:41:55.780Z] [2024-11-04 14:41:55.727652] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:56.657 [2024-11-04 14:41:55.728024] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:56.915 [2024-11-04 14:41:55.975831] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:57.173 [2024-11-04 14:41:56.194984] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:57.173 [2024-11-04 14:41:56.195446] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:57.431 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:57.431 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.431 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:57.431 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:57.431 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.431 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.431 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.432 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.432 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.432 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.432 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.432 "name": "raid_bdev1", 00:15:57.432 "uuid": "77b0c814-2cbb-4948-bf93-b3b5f027f21f", 00:15:57.432 "strip_size_kb": 0, 00:15:57.432 "state": "online", 00:15:57.432 "raid_level": "raid1", 00:15:57.432 "superblock": true, 00:15:57.432 "num_base_bdevs": 2, 00:15:57.432 "num_base_bdevs_discovered": 2, 
00:15:57.432 "num_base_bdevs_operational": 2, 00:15:57.432 "process": { 00:15:57.432 "type": "rebuild", 00:15:57.432 "target": "spare", 00:15:57.432 "progress": { 00:15:57.432 "blocks": 10240, 00:15:57.432 "percent": 16 00:15:57.432 } 00:15:57.432 }, 00:15:57.432 "base_bdevs_list": [ 00:15:57.432 { 00:15:57.432 "name": "spare", 00:15:57.432 "uuid": "dba6b175-e79f-5bac-bfa2-f4d4904cd3f9", 00:15:57.432 "is_configured": true, 00:15:57.432 "data_offset": 2048, 00:15:57.432 "data_size": 63488 00:15:57.432 }, 00:15:57.432 { 00:15:57.432 "name": "BaseBdev2", 00:15:57.432 "uuid": "e5585f6c-715d-592f-95f0-f99bbf6a688e", 00:15:57.432 "is_configured": true, 00:15:57.432 "data_offset": 2048, 00:15:57.432 "data_size": 63488 00:15:57.432 } 00:15:57.432 ] 00:15:57.432 }' 00:15:57.432 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.432 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:57.432 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.432 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:57.432 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:57.432 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:57.432 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:57.432 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:57.432 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:57.432 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:57.432 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=449 00:15:57.432 
14:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:57.432 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:57.432 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.432 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:57.432 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:57.432 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.432 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.432 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.432 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.432 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.432 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.432 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.432 "name": "raid_bdev1", 00:15:57.432 "uuid": "77b0c814-2cbb-4948-bf93-b3b5f027f21f", 00:15:57.432 "strip_size_kb": 0, 00:15:57.432 "state": "online", 00:15:57.432 "raid_level": "raid1", 00:15:57.432 "superblock": true, 00:15:57.432 "num_base_bdevs": 2, 00:15:57.432 "num_base_bdevs_discovered": 2, 00:15:57.432 "num_base_bdevs_operational": 2, 00:15:57.432 "process": { 00:15:57.432 "type": "rebuild", 00:15:57.432 "target": "spare", 00:15:57.432 "progress": { 00:15:57.432 "blocks": 12288, 00:15:57.432 "percent": 19 00:15:57.432 } 00:15:57.432 }, 00:15:57.432 "base_bdevs_list": [ 00:15:57.432 { 00:15:57.432 "name": "spare", 00:15:57.432 
"uuid": "dba6b175-e79f-5bac-bfa2-f4d4904cd3f9", 00:15:57.432 "is_configured": true, 00:15:57.432 "data_offset": 2048, 00:15:57.432 "data_size": 63488 00:15:57.432 }, 00:15:57.432 { 00:15:57.432 "name": "BaseBdev2", 00:15:57.432 "uuid": "e5585f6c-715d-592f-95f0-f99bbf6a688e", 00:15:57.432 "is_configured": true, 00:15:57.432 "data_offset": 2048, 00:15:57.432 "data_size": 63488 00:15:57.432 } 00:15:57.432 ] 00:15:57.432 }' 00:15:57.432 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.690 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:57.691 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.691 [2024-11-04 14:41:56.637500] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:57.691 [2024-11-04 14:41:56.638040] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:57.691 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:57.691 14:41:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:57.949 128.50 IOPS, 385.50 MiB/s [2024-11-04T14:41:57.072Z] [2024-11-04 14:41:56.960654] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:57.949 [2024-11-04 14:41:56.961294] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:58.206 [2024-11-04 14:41:57.096333] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:58.465 [2024-11-04 14:41:57.436991] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 
24576 offset_end: 30720 00:15:58.726 14:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:58.726 14:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.726 14:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.726 14:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.726 14:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.726 14:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.726 14:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.726 14:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.726 14:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:58.726 14:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.726 [2024-11-04 14:41:57.665027] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:58.726 14:41:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.726 116.80 IOPS, 350.40 MiB/s [2024-11-04T14:41:57.849Z] 14:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.726 "name": "raid_bdev1", 00:15:58.726 "uuid": "77b0c814-2cbb-4948-bf93-b3b5f027f21f", 00:15:58.726 "strip_size_kb": 0, 00:15:58.726 "state": "online", 00:15:58.726 "raid_level": "raid1", 00:15:58.726 "superblock": true, 00:15:58.726 "num_base_bdevs": 2, 00:15:58.726 "num_base_bdevs_discovered": 2, 00:15:58.726 "num_base_bdevs_operational": 2, 00:15:58.726 "process": { 00:15:58.726 "type": 
"rebuild", 00:15:58.726 "target": "spare", 00:15:58.726 "progress": { 00:15:58.726 "blocks": 26624, 00:15:58.726 "percent": 41 00:15:58.726 } 00:15:58.726 }, 00:15:58.726 "base_bdevs_list": [ 00:15:58.726 { 00:15:58.726 "name": "spare", 00:15:58.726 "uuid": "dba6b175-e79f-5bac-bfa2-f4d4904cd3f9", 00:15:58.726 "is_configured": true, 00:15:58.726 "data_offset": 2048, 00:15:58.726 "data_size": 63488 00:15:58.726 }, 00:15:58.726 { 00:15:58.726 "name": "BaseBdev2", 00:15:58.726 "uuid": "e5585f6c-715d-592f-95f0-f99bbf6a688e", 00:15:58.726 "is_configured": true, 00:15:58.726 "data_offset": 2048, 00:15:58.726 "data_size": 63488 00:15:58.726 } 00:15:58.726 ] 00:15:58.726 }' 00:15:58.726 14:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.726 14:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:58.726 14:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.726 14:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.726 14:41:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:58.984 [2024-11-04 14:41:58.030837] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:15:58.984 [2024-11-04 14:41:58.031544] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:15:59.270 [2024-11-04 14:41:58.242838] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:59.270 [2024-11-04 14:41:58.243275] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:59.836 103.83 IOPS, 311.50 MiB/s [2024-11-04T14:41:58.959Z] 14:41:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:59.836 14:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:59.836 14:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.836 14:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:59.836 14:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:59.836 14:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.836 14:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.836 14:41:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.836 14:41:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:59.836 14:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.836 [2024-11-04 14:41:58.839729] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:15:59.836 14:41:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.836 14:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.836 "name": "raid_bdev1", 00:15:59.836 "uuid": "77b0c814-2cbb-4948-bf93-b3b5f027f21f", 00:15:59.836 "strip_size_kb": 0, 00:15:59.836 "state": "online", 00:15:59.836 "raid_level": "raid1", 00:15:59.836 "superblock": true, 00:15:59.836 "num_base_bdevs": 2, 00:15:59.836 "num_base_bdevs_discovered": 2, 00:15:59.836 "num_base_bdevs_operational": 2, 00:15:59.836 "process": { 00:15:59.836 "type": "rebuild", 00:15:59.836 "target": "spare", 00:15:59.836 "progress": { 00:15:59.836 "blocks": 
43008, 00:15:59.836 "percent": 67 00:15:59.836 } 00:15:59.836 }, 00:15:59.836 "base_bdevs_list": [ 00:15:59.836 { 00:15:59.836 "name": "spare", 00:15:59.836 "uuid": "dba6b175-e79f-5bac-bfa2-f4d4904cd3f9", 00:15:59.836 "is_configured": true, 00:15:59.836 "data_offset": 2048, 00:15:59.836 "data_size": 63488 00:15:59.836 }, 00:15:59.836 { 00:15:59.836 "name": "BaseBdev2", 00:15:59.836 "uuid": "e5585f6c-715d-592f-95f0-f99bbf6a688e", 00:15:59.836 "is_configured": true, 00:15:59.836 "data_offset": 2048, 00:15:59.836 "data_size": 63488 00:15:59.836 } 00:15:59.836 ] 00:15:59.836 }' 00:15:59.836 14:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.836 14:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:59.836 14:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.836 [2024-11-04 14:41:58.949021] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:16:00.094 14:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.094 14:41:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:00.352 [2024-11-04 14:41:59.254790] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:16:00.352 [2024-11-04 14:41:59.357890] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:16:00.906 94.71 IOPS, 284.14 MiB/s [2024-11-04T14:42:00.029Z] [2024-11-04 14:41:59.925137] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:00.907 14:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:00.907 14:41:59 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.907 14:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.907 14:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.907 14:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.907 14:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.907 14:41:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.907 14:42:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.907 14:42:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:00.907 14:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.907 14:42:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.907 [2024-11-04 14:42:00.025164] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:00.907 [2024-11-04 14:42:00.027852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.165 14:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.165 "name": "raid_bdev1", 00:16:01.165 "uuid": "77b0c814-2cbb-4948-bf93-b3b5f027f21f", 00:16:01.165 "strip_size_kb": 0, 00:16:01.165 "state": "online", 00:16:01.165 "raid_level": "raid1", 00:16:01.165 "superblock": true, 00:16:01.165 "num_base_bdevs": 2, 00:16:01.165 "num_base_bdevs_discovered": 2, 00:16:01.165 "num_base_bdevs_operational": 2, 00:16:01.165 "process": { 00:16:01.165 "type": "rebuild", 00:16:01.165 "target": "spare", 00:16:01.165 "progress": { 00:16:01.165 "blocks": 63488, 00:16:01.165 "percent": 100 00:16:01.165 } 00:16:01.165 
}, 00:16:01.165 "base_bdevs_list": [ 00:16:01.165 { 00:16:01.165 "name": "spare", 00:16:01.165 "uuid": "dba6b175-e79f-5bac-bfa2-f4d4904cd3f9", 00:16:01.165 "is_configured": true, 00:16:01.165 "data_offset": 2048, 00:16:01.165 "data_size": 63488 00:16:01.165 }, 00:16:01.165 { 00:16:01.165 "name": "BaseBdev2", 00:16:01.165 "uuid": "e5585f6c-715d-592f-95f0-f99bbf6a688e", 00:16:01.165 "is_configured": true, 00:16:01.165 "data_offset": 2048, 00:16:01.165 "data_size": 63488 00:16:01.165 } 00:16:01.165 ] 00:16:01.165 }' 00:16:01.165 14:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.165 14:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.165 14:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.165 14:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.165 14:42:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:02.297 86.50 IOPS, 259.50 MiB/s [2024-11-04T14:42:01.420Z] 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:02.297 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.297 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.297 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.297 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.297 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.297 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.297 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.297 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.297 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.297 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.297 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.297 "name": "raid_bdev1", 00:16:02.297 "uuid": "77b0c814-2cbb-4948-bf93-b3b5f027f21f", 00:16:02.297 "strip_size_kb": 0, 00:16:02.297 "state": "online", 00:16:02.297 "raid_level": "raid1", 00:16:02.297 "superblock": true, 00:16:02.297 "num_base_bdevs": 2, 00:16:02.297 "num_base_bdevs_discovered": 2, 00:16:02.297 "num_base_bdevs_operational": 2, 00:16:02.297 "base_bdevs_list": [ 00:16:02.297 { 00:16:02.297 "name": "spare", 00:16:02.297 "uuid": "dba6b175-e79f-5bac-bfa2-f4d4904cd3f9", 00:16:02.297 "is_configured": true, 00:16:02.297 "data_offset": 2048, 00:16:02.297 "data_size": 63488 00:16:02.297 }, 00:16:02.297 { 00:16:02.297 "name": "BaseBdev2", 00:16:02.297 "uuid": "e5585f6c-715d-592f-95f0-f99bbf6a688e", 00:16:02.297 "is_configured": true, 00:16:02.297 "data_offset": 2048, 00:16:02.297 "data_size": 63488 00:16:02.297 } 00:16:02.297 ] 00:16:02.297 }' 00:16:02.297 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.297 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:02.297 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.297 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:02.297 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:16:02.297 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 
-- # verify_raid_bdev_process raid_bdev1 none none 00:16:02.297 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.297 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:02.297 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:02.297 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.297 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.297 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.297 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.297 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.297 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.297 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.297 "name": "raid_bdev1", 00:16:02.297 "uuid": "77b0c814-2cbb-4948-bf93-b3b5f027f21f", 00:16:02.297 "strip_size_kb": 0, 00:16:02.297 "state": "online", 00:16:02.297 "raid_level": "raid1", 00:16:02.297 "superblock": true, 00:16:02.297 "num_base_bdevs": 2, 00:16:02.297 "num_base_bdevs_discovered": 2, 00:16:02.297 "num_base_bdevs_operational": 2, 00:16:02.297 "base_bdevs_list": [ 00:16:02.297 { 00:16:02.297 "name": "spare", 00:16:02.297 "uuid": "dba6b175-e79f-5bac-bfa2-f4d4904cd3f9", 00:16:02.297 "is_configured": true, 00:16:02.297 "data_offset": 2048, 00:16:02.297 "data_size": 63488 00:16:02.297 }, 00:16:02.297 { 00:16:02.297 "name": "BaseBdev2", 00:16:02.297 "uuid": "e5585f6c-715d-592f-95f0-f99bbf6a688e", 00:16:02.297 "is_configured": true, 00:16:02.297 "data_offset": 2048, 00:16:02.297 "data_size": 63488 00:16:02.297 } 
00:16:02.297 ] 00:16:02.297 }' 00:16:02.297 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.555 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:02.555 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.555 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:02.555 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:02.555 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.555 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:02.555 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:02.555 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:02.555 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:02.555 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.555 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.555 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.555 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.555 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.555 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.555 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.555 14:42:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.555 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.555 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.555 "name": "raid_bdev1", 00:16:02.555 "uuid": "77b0c814-2cbb-4948-bf93-b3b5f027f21f", 00:16:02.555 "strip_size_kb": 0, 00:16:02.555 "state": "online", 00:16:02.555 "raid_level": "raid1", 00:16:02.555 "superblock": true, 00:16:02.555 "num_base_bdevs": 2, 00:16:02.555 "num_base_bdevs_discovered": 2, 00:16:02.555 "num_base_bdevs_operational": 2, 00:16:02.555 "base_bdevs_list": [ 00:16:02.555 { 00:16:02.555 "name": "spare", 00:16:02.555 "uuid": "dba6b175-e79f-5bac-bfa2-f4d4904cd3f9", 00:16:02.555 "is_configured": true, 00:16:02.555 "data_offset": 2048, 00:16:02.555 "data_size": 63488 00:16:02.555 }, 00:16:02.555 { 00:16:02.555 "name": "BaseBdev2", 00:16:02.555 "uuid": "e5585f6c-715d-592f-95f0-f99bbf6a688e", 00:16:02.555 "is_configured": true, 00:16:02.555 "data_offset": 2048, 00:16:02.555 "data_size": 63488 00:16:02.555 } 00:16:02.555 ] 00:16:02.555 }' 00:16:02.555 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.555 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:03.071 81.78 IOPS, 245.33 MiB/s [2024-11-04T14:42:02.194Z] 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:03.071 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.071 14:42:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:03.071 [2024-11-04 14:42:01.992652] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:03.071 [2024-11-04 14:42:01.992687] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:16:03.071 00:16:03.071 Latency(us) 00:16:03.071 [2024-11-04T14:42:02.194Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:03.071 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:03.071 raid_bdev1 : 9.31 80.00 240.00 0.00 0.00 16111.23 286.72 118679.74 00:16:03.071 [2024-11-04T14:42:02.194Z] =================================================================================================================== 00:16:03.071 [2024-11-04T14:42:02.194Z] Total : 80.00 240.00 0.00 0.00 16111.23 286.72 118679.74 00:16:03.071 [2024-11-04 14:42:02.028512] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.071 [2024-11-04 14:42:02.028574] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:03.071 [2024-11-04 14:42:02.028692] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:03.071 [2024-11-04 14:42:02.028710] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:03.071 { 00:16:03.071 "results": [ 00:16:03.071 { 00:16:03.071 "job": "raid_bdev1", 00:16:03.071 "core_mask": "0x1", 00:16:03.071 "workload": "randrw", 00:16:03.071 "percentage": 50, 00:16:03.071 "status": "finished", 00:16:03.071 "queue_depth": 2, 00:16:03.071 "io_size": 3145728, 00:16:03.071 "runtime": 9.312455, 00:16:03.071 "iops": 80.00038657904923, 00:16:03.071 "mibps": 240.0011597371477, 00:16:03.071 "io_failed": 0, 00:16:03.071 "io_timeout": 0, 00:16:03.071 "avg_latency_us": 16111.231140939595, 00:16:03.071 "min_latency_us": 286.72, 00:16:03.071 "max_latency_us": 118679.73818181817 00:16:03.071 } 00:16:03.071 ], 00:16:03.071 "core_count": 1 00:16:03.071 } 00:16:03.071 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.071 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.071 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.071 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:03.071 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:03.071 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.071 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:03.071 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:03.071 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:03.071 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:03.071 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:03.071 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:03.071 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:03.071 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:03.071 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:03.071 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:03.071 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:03.071 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:03.071 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:03.329 /dev/nbd0 00:16:03.329 14:42:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:03.329 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:03.329 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:03.329 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:16:03.329 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:03.329 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:03.329 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:03.329 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:16:03.329 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:03.329 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:03.329 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:03.329 1+0 records in 00:16:03.329 1+0 records out 00:16:03.329 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529681 s, 7.7 MB/s 00:16:03.329 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.329 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:16:03.329 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.329 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:03.329 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 
00:16:03.329 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:03.329 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:03.329 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:03.329 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:16:03.329 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:16:03.329 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:03.329 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:16:03.329 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:03.329 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:03.329 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:03.329 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:03.329 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:03.329 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:03.329 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:16:03.586 /dev/nbd1 00:16:03.847 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:03.847 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:03.847 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:16:03.847 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@871 -- # local i 00:16:03.847 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:03.847 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:03.847 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:16:03.847 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:16:03.847 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:03.847 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:03.847 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:03.847 1+0 records in 00:16:03.847 1+0 records out 00:16:03.847 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366482 s, 11.2 MB/s 00:16:03.847 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.847 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:16:03.847 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.847 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:03.847 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:16:03.847 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:03.847 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:03.847 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:03.847 14:42:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:03.847 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:03.847 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:03.847 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:03.847 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:03.847 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:03.847 14:42:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:04.105 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:04.105 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:04.105 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:04.105 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:04.105 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:04.105 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:04.105 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:04.105 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:04.105 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:04.105 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:04.105 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0') 00:16:04.105 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:04.105 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:04.105 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:04.105 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:04.396 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:04.396 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:04.396 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:04.396 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:04.396 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:04.396 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:04.655 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:04.655 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:04.655 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:04.655 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:04.655 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.655 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:04.655 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.655 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b 
spare_delay -p spare 00:16:04.655 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.655 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:04.655 [2024-11-04 14:42:03.534485] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:04.655 [2024-11-04 14:42:03.534549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.655 [2024-11-04 14:42:03.534582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:04.655 [2024-11-04 14:42:03.534598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.655 [2024-11-04 14:42:03.537467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.655 [2024-11-04 14:42:03.537513] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:04.655 [2024-11-04 14:42:03.537632] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:04.655 [2024-11-04 14:42:03.537693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:04.655 [2024-11-04 14:42:03.537882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:04.655 spare 00:16:04.655 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.655 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:04.655 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.655 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:04.655 [2024-11-04 14:42:03.638055] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:04.655 [2024-11-04 14:42:03.638379] bdev_raid.c:1735:raid_bdev_configure_cont: 
*DEBUG*: blockcnt 63488, blocklen 512 00:16:04.655 [2024-11-04 14:42:03.638833] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:16:04.655 [2024-11-04 14:42:03.639117] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:04.655 [2024-11-04 14:42:03.639135] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:04.655 [2024-11-04 14:42:03.639389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.655 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.655 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:04.655 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.655 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.655 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.655 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.655 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:04.655 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.655 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.655 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.655 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.655 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.655 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.655 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.655 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:04.655 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.655 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.655 "name": "raid_bdev1", 00:16:04.655 "uuid": "77b0c814-2cbb-4948-bf93-b3b5f027f21f", 00:16:04.655 "strip_size_kb": 0, 00:16:04.655 "state": "online", 00:16:04.655 "raid_level": "raid1", 00:16:04.655 "superblock": true, 00:16:04.655 "num_base_bdevs": 2, 00:16:04.655 "num_base_bdevs_discovered": 2, 00:16:04.656 "num_base_bdevs_operational": 2, 00:16:04.656 "base_bdevs_list": [ 00:16:04.656 { 00:16:04.656 "name": "spare", 00:16:04.656 "uuid": "dba6b175-e79f-5bac-bfa2-f4d4904cd3f9", 00:16:04.656 "is_configured": true, 00:16:04.656 "data_offset": 2048, 00:16:04.656 "data_size": 63488 00:16:04.656 }, 00:16:04.656 { 00:16:04.656 "name": "BaseBdev2", 00:16:04.656 "uuid": "e5585f6c-715d-592f-95f0-f99bbf6a688e", 00:16:04.656 "is_configured": true, 00:16:04.656 "data_offset": 2048, 00:16:04.656 "data_size": 63488 00:16:04.656 } 00:16:04.656 ] 00:16:04.656 }' 00:16:04.656 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.656 14:42:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:05.223 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:05.223 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.223 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:05.223 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:16:05.223 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.223 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.223 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.223 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.223 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:05.223 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.223 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.223 "name": "raid_bdev1", 00:16:05.223 "uuid": "77b0c814-2cbb-4948-bf93-b3b5f027f21f", 00:16:05.223 "strip_size_kb": 0, 00:16:05.223 "state": "online", 00:16:05.223 "raid_level": "raid1", 00:16:05.223 "superblock": true, 00:16:05.223 "num_base_bdevs": 2, 00:16:05.223 "num_base_bdevs_discovered": 2, 00:16:05.223 "num_base_bdevs_operational": 2, 00:16:05.223 "base_bdevs_list": [ 00:16:05.223 { 00:16:05.223 "name": "spare", 00:16:05.223 "uuid": "dba6b175-e79f-5bac-bfa2-f4d4904cd3f9", 00:16:05.223 "is_configured": true, 00:16:05.223 "data_offset": 2048, 00:16:05.223 "data_size": 63488 00:16:05.223 }, 00:16:05.223 { 00:16:05.223 "name": "BaseBdev2", 00:16:05.223 "uuid": "e5585f6c-715d-592f-95f0-f99bbf6a688e", 00:16:05.223 "is_configured": true, 00:16:05.223 "data_offset": 2048, 00:16:05.223 "data_size": 63488 00:16:05.223 } 00:16:05.223 ] 00:16:05.223 }' 00:16:05.223 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.223 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:05.223 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:16:05.223 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:05.223 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.223 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.223 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:05.223 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:05.223 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.483 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:05.483 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:05.483 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.483 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:05.483 [2024-11-04 14:42:04.375622] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:05.483 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.483 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:05.483 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.483 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.483 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:05.483 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:05.483 14:42:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:05.483 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.483 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.483 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.483 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.483 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.483 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.483 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.483 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:05.483 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.483 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.483 "name": "raid_bdev1", 00:16:05.483 "uuid": "77b0c814-2cbb-4948-bf93-b3b5f027f21f", 00:16:05.483 "strip_size_kb": 0, 00:16:05.483 "state": "online", 00:16:05.483 "raid_level": "raid1", 00:16:05.483 "superblock": true, 00:16:05.483 "num_base_bdevs": 2, 00:16:05.483 "num_base_bdevs_discovered": 1, 00:16:05.483 "num_base_bdevs_operational": 1, 00:16:05.483 "base_bdevs_list": [ 00:16:05.483 { 00:16:05.483 "name": null, 00:16:05.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.483 "is_configured": false, 00:16:05.483 "data_offset": 0, 00:16:05.483 "data_size": 63488 00:16:05.483 }, 00:16:05.483 { 00:16:05.483 "name": "BaseBdev2", 00:16:05.483 "uuid": "e5585f6c-715d-592f-95f0-f99bbf6a688e", 00:16:05.483 "is_configured": true, 00:16:05.483 "data_offset": 2048, 00:16:05.483 
"data_size": 63488 00:16:05.483 } 00:16:05.483 ] 00:16:05.483 }' 00:16:05.483 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.483 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:06.052 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:06.052 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.052 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:06.052 [2024-11-04 14:42:04.963897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:06.052 [2024-11-04 14:42:04.964287] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:06.052 [2024-11-04 14:42:04.964466] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:06.052 [2024-11-04 14:42:04.964668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:06.052 [2024-11-04 14:42:04.980652] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:16:06.052 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.052 14:42:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:06.052 [2024-11-04 14:42:04.983385] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:06.988 14:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:06.988 14:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.988 14:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:06.988 14:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:06.988 14:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.988 14:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.988 14:42:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.988 14:42:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.988 14:42:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:06.988 14:42:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.988 14:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.988 "name": "raid_bdev1", 00:16:06.988 "uuid": "77b0c814-2cbb-4948-bf93-b3b5f027f21f", 00:16:06.988 "strip_size_kb": 0, 00:16:06.988 "state": "online", 
00:16:06.988 "raid_level": "raid1", 00:16:06.988 "superblock": true, 00:16:06.988 "num_base_bdevs": 2, 00:16:06.988 "num_base_bdevs_discovered": 2, 00:16:06.988 "num_base_bdevs_operational": 2, 00:16:06.988 "process": { 00:16:06.988 "type": "rebuild", 00:16:06.988 "target": "spare", 00:16:06.988 "progress": { 00:16:06.988 "blocks": 20480, 00:16:06.988 "percent": 32 00:16:06.988 } 00:16:06.988 }, 00:16:06.988 "base_bdevs_list": [ 00:16:06.988 { 00:16:06.988 "name": "spare", 00:16:06.988 "uuid": "dba6b175-e79f-5bac-bfa2-f4d4904cd3f9", 00:16:06.988 "is_configured": true, 00:16:06.988 "data_offset": 2048, 00:16:06.988 "data_size": 63488 00:16:06.988 }, 00:16:06.988 { 00:16:06.988 "name": "BaseBdev2", 00:16:06.988 "uuid": "e5585f6c-715d-592f-95f0-f99bbf6a688e", 00:16:06.988 "is_configured": true, 00:16:06.988 "data_offset": 2048, 00:16:06.988 "data_size": 63488 00:16:06.988 } 00:16:06.988 ] 00:16:06.988 }' 00:16:06.988 14:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.988 14:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:06.988 14:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.246 14:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.247 14:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:07.247 14:42:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.247 14:42:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.247 [2024-11-04 14:42:06.157297] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:07.247 [2024-11-04 14:42:06.192538] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:07.247 [2024-11-04 
14:42:06.192667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.247 [2024-11-04 14:42:06.192691] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:07.247 [2024-11-04 14:42:06.192706] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:07.247 14:42:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.247 14:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:07.247 14:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.247 14:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.247 14:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:07.247 14:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:07.247 14:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:07.247 14:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.247 14:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.247 14:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.247 14:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.247 14:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.247 14:42:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.247 14:42:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.247 14:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:16:07.247 14:42:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.247 14:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.247 "name": "raid_bdev1", 00:16:07.247 "uuid": "77b0c814-2cbb-4948-bf93-b3b5f027f21f", 00:16:07.247 "strip_size_kb": 0, 00:16:07.247 "state": "online", 00:16:07.247 "raid_level": "raid1", 00:16:07.247 "superblock": true, 00:16:07.247 "num_base_bdevs": 2, 00:16:07.247 "num_base_bdevs_discovered": 1, 00:16:07.247 "num_base_bdevs_operational": 1, 00:16:07.247 "base_bdevs_list": [ 00:16:07.247 { 00:16:07.247 "name": null, 00:16:07.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.247 "is_configured": false, 00:16:07.247 "data_offset": 0, 00:16:07.247 "data_size": 63488 00:16:07.247 }, 00:16:07.247 { 00:16:07.247 "name": "BaseBdev2", 00:16:07.247 "uuid": "e5585f6c-715d-592f-95f0-f99bbf6a688e", 00:16:07.247 "is_configured": true, 00:16:07.247 "data_offset": 2048, 00:16:07.247 "data_size": 63488 00:16:07.247 } 00:16:07.247 ] 00:16:07.247 }' 00:16:07.247 14:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.247 14:42:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.825 14:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:07.825 14:42:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.825 14:42:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.825 [2024-11-04 14:42:06.739620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:07.825 [2024-11-04 14:42:06.739720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.825 [2024-11-04 14:42:06.739753] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:16:07.825 [2024-11-04 14:42:06.739772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.825 [2024-11-04 14:42:06.740439] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.825 [2024-11-04 14:42:06.740497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:07.825 [2024-11-04 14:42:06.740616] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:07.825 [2024-11-04 14:42:06.740641] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:07.825 [2024-11-04 14:42:06.740655] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:07.825 [2024-11-04 14:42:06.740690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:07.825 [2024-11-04 14:42:06.756536] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:16:07.825 spare 00:16:07.825 14:42:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.825 14:42:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:07.825 [2024-11-04 14:42:06.759115] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:08.760 14:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.760 14:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.760 14:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.760 14:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.760 14:42:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.760 14:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.760 14:42:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.760 14:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.760 14:42:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.760 14:42:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.760 14:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.760 "name": "raid_bdev1", 00:16:08.760 "uuid": "77b0c814-2cbb-4948-bf93-b3b5f027f21f", 00:16:08.760 "strip_size_kb": 0, 00:16:08.760 "state": "online", 00:16:08.760 "raid_level": "raid1", 00:16:08.760 "superblock": true, 00:16:08.760 "num_base_bdevs": 2, 00:16:08.760 "num_base_bdevs_discovered": 2, 00:16:08.760 "num_base_bdevs_operational": 2, 00:16:08.760 "process": { 00:16:08.760 "type": "rebuild", 00:16:08.760 "target": "spare", 00:16:08.760 "progress": { 00:16:08.760 "blocks": 20480, 00:16:08.760 "percent": 32 00:16:08.760 } 00:16:08.760 }, 00:16:08.760 "base_bdevs_list": [ 00:16:08.760 { 00:16:08.760 "name": "spare", 00:16:08.760 "uuid": "dba6b175-e79f-5bac-bfa2-f4d4904cd3f9", 00:16:08.760 "is_configured": true, 00:16:08.760 "data_offset": 2048, 00:16:08.760 "data_size": 63488 00:16:08.760 }, 00:16:08.760 { 00:16:08.760 "name": "BaseBdev2", 00:16:08.760 "uuid": "e5585f6c-715d-592f-95f0-f99bbf6a688e", 00:16:08.760 "is_configured": true, 00:16:08.760 "data_offset": 2048, 00:16:08.760 "data_size": 63488 00:16:08.760 } 00:16:08.760 ] 00:16:08.760 }' 00:16:08.760 14:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.760 14:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:16:08.760 14:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.019 14:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.019 14:42:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:09.019 14:42:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.019 14:42:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.019 [2024-11-04 14:42:07.917059] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:09.019 [2024-11-04 14:42:07.968269] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:09.019 [2024-11-04 14:42:07.968608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.019 [2024-11-04 14:42:07.968646] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:09.019 [2024-11-04 14:42:07.968659] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:09.019 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.019 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:09.019 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.019 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.019 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:09.019 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:09.019 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:16:09.019 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.019 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.019 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.019 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.019 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.019 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.019 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.019 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.019 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.019 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.019 "name": "raid_bdev1", 00:16:09.019 "uuid": "77b0c814-2cbb-4948-bf93-b3b5f027f21f", 00:16:09.019 "strip_size_kb": 0, 00:16:09.019 "state": "online", 00:16:09.019 "raid_level": "raid1", 00:16:09.019 "superblock": true, 00:16:09.019 "num_base_bdevs": 2, 00:16:09.019 "num_base_bdevs_discovered": 1, 00:16:09.019 "num_base_bdevs_operational": 1, 00:16:09.019 "base_bdevs_list": [ 00:16:09.019 { 00:16:09.019 "name": null, 00:16:09.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.019 "is_configured": false, 00:16:09.019 "data_offset": 0, 00:16:09.019 "data_size": 63488 00:16:09.019 }, 00:16:09.019 { 00:16:09.019 "name": "BaseBdev2", 00:16:09.019 "uuid": "e5585f6c-715d-592f-95f0-f99bbf6a688e", 00:16:09.019 "is_configured": true, 00:16:09.019 "data_offset": 2048, 00:16:09.019 "data_size": 63488 00:16:09.019 } 00:16:09.019 ] 00:16:09.019 }' 
00:16:09.019 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.019 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.585 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:09.585 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.585 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:09.585 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:09.585 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.585 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.585 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.585 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.585 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.585 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.585 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.585 "name": "raid_bdev1", 00:16:09.585 "uuid": "77b0c814-2cbb-4948-bf93-b3b5f027f21f", 00:16:09.585 "strip_size_kb": 0, 00:16:09.585 "state": "online", 00:16:09.585 "raid_level": "raid1", 00:16:09.585 "superblock": true, 00:16:09.585 "num_base_bdevs": 2, 00:16:09.585 "num_base_bdevs_discovered": 1, 00:16:09.585 "num_base_bdevs_operational": 1, 00:16:09.585 "base_bdevs_list": [ 00:16:09.585 { 00:16:09.585 "name": null, 00:16:09.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.585 "is_configured": false, 00:16:09.585 "data_offset": 0, 
00:16:09.585 "data_size": 63488 00:16:09.585 }, 00:16:09.585 { 00:16:09.585 "name": "BaseBdev2", 00:16:09.585 "uuid": "e5585f6c-715d-592f-95f0-f99bbf6a688e", 00:16:09.585 "is_configured": true, 00:16:09.585 "data_offset": 2048, 00:16:09.585 "data_size": 63488 00:16:09.585 } 00:16:09.585 ] 00:16:09.585 }' 00:16:09.585 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.585 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:09.585 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.585 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:09.585 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:09.586 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.586 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.586 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.586 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:09.586 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.586 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.586 [2024-11-04 14:42:08.611485] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:09.586 [2024-11-04 14:42:08.611568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.586 [2024-11-04 14:42:08.611601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:09.586 [2024-11-04 14:42:08.611617] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.586 [2024-11-04 14:42:08.612173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.586 [2024-11-04 14:42:08.612213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:09.586 [2024-11-04 14:42:08.612316] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:09.586 [2024-11-04 14:42:08.612348] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:09.586 [2024-11-04 14:42:08.612362] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:09.586 [2024-11-04 14:42:08.612375] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:09.586 BaseBdev1 00:16:09.586 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.586 14:42:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:10.521 14:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:10.521 14:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.521 14:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.521 14:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.521 14:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.521 14:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:10.521 14:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.521 14:42:09 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.521 14:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.521 14:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.521 14:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.521 14:42:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.521 14:42:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.521 14:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.521 14:42:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.778 14:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.778 "name": "raid_bdev1", 00:16:10.778 "uuid": "77b0c814-2cbb-4948-bf93-b3b5f027f21f", 00:16:10.778 "strip_size_kb": 0, 00:16:10.778 "state": "online", 00:16:10.778 "raid_level": "raid1", 00:16:10.778 "superblock": true, 00:16:10.778 "num_base_bdevs": 2, 00:16:10.778 "num_base_bdevs_discovered": 1, 00:16:10.778 "num_base_bdevs_operational": 1, 00:16:10.778 "base_bdevs_list": [ 00:16:10.778 { 00:16:10.778 "name": null, 00:16:10.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.779 "is_configured": false, 00:16:10.779 "data_offset": 0, 00:16:10.779 "data_size": 63488 00:16:10.779 }, 00:16:10.779 { 00:16:10.779 "name": "BaseBdev2", 00:16:10.779 "uuid": "e5585f6c-715d-592f-95f0-f99bbf6a688e", 00:16:10.779 "is_configured": true, 00:16:10.779 "data_offset": 2048, 00:16:10.779 "data_size": 63488 00:16:10.779 } 00:16:10.779 ] 00:16:10.779 }' 00:16:10.779 14:42:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.779 14:42:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:16:11.036 14:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:11.036 14:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.036 14:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:11.036 14:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:11.036 14:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.036 14:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.036 14:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.036 14:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.036 14:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.036 14:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.307 14:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.307 "name": "raid_bdev1", 00:16:11.307 "uuid": "77b0c814-2cbb-4948-bf93-b3b5f027f21f", 00:16:11.307 "strip_size_kb": 0, 00:16:11.307 "state": "online", 00:16:11.307 "raid_level": "raid1", 00:16:11.307 "superblock": true, 00:16:11.307 "num_base_bdevs": 2, 00:16:11.307 "num_base_bdevs_discovered": 1, 00:16:11.307 "num_base_bdevs_operational": 1, 00:16:11.307 "base_bdevs_list": [ 00:16:11.307 { 00:16:11.307 "name": null, 00:16:11.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.307 "is_configured": false, 00:16:11.307 "data_offset": 0, 00:16:11.307 "data_size": 63488 00:16:11.307 }, 00:16:11.307 { 00:16:11.307 "name": "BaseBdev2", 00:16:11.307 "uuid": "e5585f6c-715d-592f-95f0-f99bbf6a688e", 00:16:11.307 "is_configured": true, 
00:16:11.307 "data_offset": 2048, 00:16:11.307 "data_size": 63488 00:16:11.307 } 00:16:11.307 ] 00:16:11.307 }' 00:16:11.307 14:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.307 14:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:11.307 14:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.307 14:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:11.307 14:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:11.307 14:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:16:11.307 14:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:11.307 14:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:11.307 14:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:11.307 14:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:11.307 14:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:11.307 14:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:11.307 14:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.307 14:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.307 [2024-11-04 14:42:10.280288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:11.307 [2024-11-04 14:42:10.280619] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:11.307 [2024-11-04 14:42:10.280655] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:11.307 request: 00:16:11.307 { 00:16:11.307 "base_bdev": "BaseBdev1", 00:16:11.307 "raid_bdev": "raid_bdev1", 00:16:11.307 "method": "bdev_raid_add_base_bdev", 00:16:11.307 "req_id": 1 00:16:11.307 } 00:16:11.307 Got JSON-RPC error response 00:16:11.307 response: 00:16:11.307 { 00:16:11.307 "code": -22, 00:16:11.307 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:11.307 } 00:16:11.307 14:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:11.307 14:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:16:11.307 14:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:11.307 14:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:11.307 14:42:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:11.307 14:42:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:12.272 14:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:12.272 14:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.272 14:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.273 14:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.273 14:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.273 14:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:16:12.273 14:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.273 14:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.273 14:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.273 14:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.273 14:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.273 14:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.273 14:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.273 14:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.273 14:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.273 14:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.273 "name": "raid_bdev1", 00:16:12.273 "uuid": "77b0c814-2cbb-4948-bf93-b3b5f027f21f", 00:16:12.273 "strip_size_kb": 0, 00:16:12.273 "state": "online", 00:16:12.273 "raid_level": "raid1", 00:16:12.273 "superblock": true, 00:16:12.273 "num_base_bdevs": 2, 00:16:12.273 "num_base_bdevs_discovered": 1, 00:16:12.273 "num_base_bdevs_operational": 1, 00:16:12.273 "base_bdevs_list": [ 00:16:12.273 { 00:16:12.273 "name": null, 00:16:12.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.273 "is_configured": false, 00:16:12.273 "data_offset": 0, 00:16:12.273 "data_size": 63488 00:16:12.273 }, 00:16:12.273 { 00:16:12.273 "name": "BaseBdev2", 00:16:12.273 "uuid": "e5585f6c-715d-592f-95f0-f99bbf6a688e", 00:16:12.273 "is_configured": true, 00:16:12.273 "data_offset": 2048, 00:16:12.273 "data_size": 63488 00:16:12.273 } 00:16:12.273 ] 00:16:12.273 }' 
00:16:12.273 14:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.273 14:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.840 14:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:12.840 14:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.840 14:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:12.840 14:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:12.840 14:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.840 14:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.840 14:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.840 14:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.840 14:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.840 14:42:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.840 14:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.840 "name": "raid_bdev1", 00:16:12.840 "uuid": "77b0c814-2cbb-4948-bf93-b3b5f027f21f", 00:16:12.840 "strip_size_kb": 0, 00:16:12.840 "state": "online", 00:16:12.840 "raid_level": "raid1", 00:16:12.840 "superblock": true, 00:16:12.840 "num_base_bdevs": 2, 00:16:12.840 "num_base_bdevs_discovered": 1, 00:16:12.840 "num_base_bdevs_operational": 1, 00:16:12.840 "base_bdevs_list": [ 00:16:12.840 { 00:16:12.840 "name": null, 00:16:12.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.840 "is_configured": false, 00:16:12.840 "data_offset": 0, 
00:16:12.840 "data_size": 63488 00:16:12.840 }, 00:16:12.840 { 00:16:12.840 "name": "BaseBdev2", 00:16:12.840 "uuid": "e5585f6c-715d-592f-95f0-f99bbf6a688e", 00:16:12.840 "is_configured": true, 00:16:12.840 "data_offset": 2048, 00:16:12.840 "data_size": 63488 00:16:12.840 } 00:16:12.840 ] 00:16:12.840 }' 00:16:12.840 14:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.099 14:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:13.099 14:42:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.099 14:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:13.099 14:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77068 00:16:13.099 14:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 77068 ']' 00:16:13.099 14:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 77068 00:16:13.099 14:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:16:13.099 14:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:13.099 14:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77068 00:16:13.099 killing process with pid 77068 00:16:13.099 Received shutdown signal, test time was about 19.361036 seconds 00:16:13.099 00:16:13.099 Latency(us) 00:16:13.099 [2024-11-04T14:42:12.222Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:13.099 [2024-11-04T14:42:12.222Z] =================================================================================================================== 00:16:13.099 [2024-11-04T14:42:12.222Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:13.099 14:42:12 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:13.099 14:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:13.099 14:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77068' 00:16:13.099 14:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 77068 00:16:13.099 [2024-11-04 14:42:12.057142] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:13.099 14:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 77068 00:16:13.099 [2024-11-04 14:42:12.057307] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:13.099 [2024-11-04 14:42:12.057377] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:13.100 [2024-11-04 14:42:12.057400] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:13.358 [2024-11-04 14:42:12.265762] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:14.294 14:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:14.294 00:16:14.294 real 0m22.784s 00:16:14.294 user 0m30.844s 00:16:14.294 sys 0m1.984s 00:16:14.294 ************************************ 00:16:14.294 END TEST raid_rebuild_test_sb_io 00:16:14.294 ************************************ 00:16:14.294 14:42:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:14.294 14:42:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.294 14:42:13 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:16:14.294 14:42:13 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:16:14.294 14:42:13 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 
00:16:14.294 14:42:13 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:14.294 14:42:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:14.294 ************************************ 00:16:14.294 START TEST raid_rebuild_test 00:16:14.294 ************************************ 00:16:14.294 14:42:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false false true 00:16:14.294 14:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:14.294 14:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:14.294 14:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:14.294 14:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:14.294 14:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:14.294 14:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:14.295 14:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:14.295 14:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:14.295 14:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:14.295 14:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:14.295 14:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:14.295 14:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:14.295 14:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:14.295 14:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:14.295 14:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:14.295 14:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i 
<= num_base_bdevs )) 00:16:14.295 14:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:14.295 14:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:14.295 14:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:14.295 14:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:14.295 14:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:14.295 14:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:14.295 14:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:14.295 14:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:14.295 14:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:14.295 14:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:14.295 14:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:14.295 14:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:14.295 14:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:14.295 14:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77788 00:16:14.295 14:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:14.295 14:42:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77788 00:16:14.295 14:42:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 77788 ']' 00:16:14.295 14:42:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.295 14:42:13 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:14.295 14:42:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.295 14:42:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:14.295 14:42:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.554 [2024-11-04 14:42:13.552762] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:16:14.554 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:14.554 Zero copy mechanism will not be used. 00:16:14.554 [2024-11-04 14:42:13.553748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77788 ] 00:16:14.813 [2024-11-04 14:42:13.741302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.813 [2024-11-04 14:42:13.898580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.071 [2024-11-04 14:42:14.152240] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:15.071 [2024-11-04 14:42:14.152290] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:15.355 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:15.355 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:16:15.355 14:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:15.355 14:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:16:15.355 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.355 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.615 BaseBdev1_malloc 00:16:15.615 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.615 14:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:15.615 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.615 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.615 [2024-11-04 14:42:14.507118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:15.615 [2024-11-04 14:42:14.507206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.615 [2024-11-04 14:42:14.507241] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:15.615 [2024-11-04 14:42:14.507261] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.615 [2024-11-04 14:42:14.510177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.615 [2024-11-04 14:42:14.510232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:15.615 BaseBdev1 00:16:15.615 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.615 14:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:15.615 14:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:15.615 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.615 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:16:15.615 BaseBdev2_malloc 00:16:15.615 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.615 14:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:15.615 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.615 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.615 [2024-11-04 14:42:14.560098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:15.615 [2024-11-04 14:42:14.560185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.615 [2024-11-04 14:42:14.560215] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:15.615 [2024-11-04 14:42:14.560234] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.615 [2024-11-04 14:42:14.562982] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.615 [2024-11-04 14:42:14.563027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:15.615 BaseBdev2 00:16:15.615 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.615 14:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:15.615 14:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:15.615 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.615 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.615 BaseBdev3_malloc 00:16:15.615 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.615 14:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:15.615 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.615 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.615 [2024-11-04 14:42:14.622699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:15.615 [2024-11-04 14:42:14.622917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.615 [2024-11-04 14:42:14.622981] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:15.615 [2024-11-04 14:42:14.623003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.615 [2024-11-04 14:42:14.625740] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.615 [2024-11-04 14:42:14.625794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:15.615 BaseBdev3 00:16:15.615 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.615 14:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:15.615 14:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:15.615 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.615 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.615 BaseBdev4_malloc 00:16:15.616 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.616 14:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:15.616 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.616 14:42:14 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:15.616 [2024-11-04 14:42:14.678794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:15.616 [2024-11-04 14:42:14.678871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.616 [2024-11-04 14:42:14.678909] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:15.616 [2024-11-04 14:42:14.678947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.616 [2024-11-04 14:42:14.681700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.616 [2024-11-04 14:42:14.681877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:15.616 BaseBdev4 00:16:15.616 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.616 14:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:15.616 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.616 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.616 spare_malloc 00:16:15.616 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.616 14:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:15.616 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.616 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.875 spare_delay 00:16:15.875 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.875 14:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:15.875 
14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.875 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.875 [2024-11-04 14:42:14.738819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:15.875 [2024-11-04 14:42:14.738899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.875 [2024-11-04 14:42:14.738960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:15.875 [2024-11-04 14:42:14.738982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.875 [2024-11-04 14:42:14.741716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.875 [2024-11-04 14:42:14.741770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:15.875 spare 00:16:15.875 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.875 14:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:15.875 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.875 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.875 [2024-11-04 14:42:14.746876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:15.875 [2024-11-04 14:42:14.749275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:15.875 [2024-11-04 14:42:14.749374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:15.875 [2024-11-04 14:42:14.749459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:15.875 [2024-11-04 14:42:14.749577] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007780 00:16:15.875 [2024-11-04 14:42:14.749601] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:15.875 [2024-11-04 14:42:14.749961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:15.875 [2024-11-04 14:42:14.750205] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:15.875 [2024-11-04 14:42:14.750225] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:15.875 [2024-11-04 14:42:14.750441] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:15.875 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.875 14:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:15.875 14:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.875 14:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.876 14:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:15.876 14:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:15.876 14:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:15.876 14:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.876 14:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.876 14:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.876 14:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.876 14:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:15.876 14:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.876 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.876 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.876 14:42:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.876 14:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.876 "name": "raid_bdev1", 00:16:15.876 "uuid": "3d2d6375-513a-46cf-9e93-00fc5a695dc2", 00:16:15.876 "strip_size_kb": 0, 00:16:15.876 "state": "online", 00:16:15.876 "raid_level": "raid1", 00:16:15.876 "superblock": false, 00:16:15.876 "num_base_bdevs": 4, 00:16:15.876 "num_base_bdevs_discovered": 4, 00:16:15.876 "num_base_bdevs_operational": 4, 00:16:15.876 "base_bdevs_list": [ 00:16:15.876 { 00:16:15.876 "name": "BaseBdev1", 00:16:15.876 "uuid": "c8184ff1-3e59-5858-b0bb-bc80a36ed0ff", 00:16:15.876 "is_configured": true, 00:16:15.876 "data_offset": 0, 00:16:15.876 "data_size": 65536 00:16:15.876 }, 00:16:15.876 { 00:16:15.876 "name": "BaseBdev2", 00:16:15.876 "uuid": "c4722b5a-701f-5e31-95d4-212187c81a3e", 00:16:15.876 "is_configured": true, 00:16:15.876 "data_offset": 0, 00:16:15.876 "data_size": 65536 00:16:15.876 }, 00:16:15.876 { 00:16:15.876 "name": "BaseBdev3", 00:16:15.876 "uuid": "b3fb1f8c-95b5-540c-be68-fec7c6e40d2c", 00:16:15.876 "is_configured": true, 00:16:15.876 "data_offset": 0, 00:16:15.876 "data_size": 65536 00:16:15.876 }, 00:16:15.876 { 00:16:15.876 "name": "BaseBdev4", 00:16:15.876 "uuid": "e676de36-b826-5949-ba8a-8ece6fba344e", 00:16:15.876 "is_configured": true, 00:16:15.876 "data_offset": 0, 00:16:15.876 "data_size": 65536 00:16:15.876 } 00:16:15.876 ] 00:16:15.876 }' 00:16:15.876 14:42:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.876 14:42:14 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:16.443 14:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:16.443 14:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:16.443 14:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.443 14:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.443 [2024-11-04 14:42:15.259441] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:16.443 14:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.443 14:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:16:16.443 14:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:16.443 14:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.443 14:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.443 14:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.443 14:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.443 14:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:16.443 14:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:16.443 14:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:16.443 14:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:16.443 14:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:16.443 14:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:16.443 14:42:15 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:16.443 14:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:16.443 14:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:16.443 14:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:16.443 14:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:16.443 14:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:16.443 14:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:16.443 14:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:16.701 [2024-11-04 14:42:15.647186] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:16.702 /dev/nbd0 00:16:16.702 14:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:16.702 14:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:16.702 14:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:16.702 14:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:16:16.702 14:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:16.702 14:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:16.702 14:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:16.702 14:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:16:16.702 14:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:16.702 14:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 
00:16:16.702 14:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:16.702 1+0 records in 00:16:16.702 1+0 records out 00:16:16.702 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530727 s, 7.7 MB/s 00:16:16.702 14:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:16.702 14:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:16:16.702 14:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:16.702 14:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:16.702 14:42:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:16:16.702 14:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:16.702 14:42:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:16.702 14:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:16.702 14:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:16.702 14:42:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:16:24.835 65536+0 records in 00:16:24.835 65536+0 records out 00:16:24.835 33554432 bytes (34 MB, 32 MiB) copied, 8.09303 s, 4.1 MB/s 00:16:24.835 14:42:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:24.835 14:42:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:24.835 14:42:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:24.835 14:42:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local 
nbd_list 00:16:24.835 14:42:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:24.835 14:42:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:24.835 14:42:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:25.094 [2024-11-04 14:42:24.096700] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.094 14:42:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:25.094 14:42:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:25.094 14:42:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:25.094 14:42:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:25.094 14:42:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:25.094 14:42:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:25.094 14:42:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:25.094 14:42:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:25.094 14:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:25.094 14:42:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.094 14:42:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.094 [2024-11-04 14:42:24.128816] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:25.094 14:42:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.094 14:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:25.094 14:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:16:25.094 14:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.094 14:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:25.094 14:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:25.094 14:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:25.094 14:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.094 14:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.094 14:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.094 14:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.094 14:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.094 14:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.094 14:42:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.094 14:42:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.094 14:42:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.094 14:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.094 "name": "raid_bdev1", 00:16:25.094 "uuid": "3d2d6375-513a-46cf-9e93-00fc5a695dc2", 00:16:25.094 "strip_size_kb": 0, 00:16:25.094 "state": "online", 00:16:25.094 "raid_level": "raid1", 00:16:25.094 "superblock": false, 00:16:25.094 "num_base_bdevs": 4, 00:16:25.094 "num_base_bdevs_discovered": 3, 00:16:25.094 "num_base_bdevs_operational": 3, 00:16:25.094 "base_bdevs_list": [ 00:16:25.094 { 00:16:25.094 "name": null, 00:16:25.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.094 
"is_configured": false, 00:16:25.094 "data_offset": 0, 00:16:25.094 "data_size": 65536 00:16:25.094 }, 00:16:25.094 { 00:16:25.094 "name": "BaseBdev2", 00:16:25.094 "uuid": "c4722b5a-701f-5e31-95d4-212187c81a3e", 00:16:25.094 "is_configured": true, 00:16:25.094 "data_offset": 0, 00:16:25.094 "data_size": 65536 00:16:25.094 }, 00:16:25.094 { 00:16:25.094 "name": "BaseBdev3", 00:16:25.094 "uuid": "b3fb1f8c-95b5-540c-be68-fec7c6e40d2c", 00:16:25.094 "is_configured": true, 00:16:25.094 "data_offset": 0, 00:16:25.094 "data_size": 65536 00:16:25.094 }, 00:16:25.094 { 00:16:25.094 "name": "BaseBdev4", 00:16:25.094 "uuid": "e676de36-b826-5949-ba8a-8ece6fba344e", 00:16:25.094 "is_configured": true, 00:16:25.094 "data_offset": 0, 00:16:25.094 "data_size": 65536 00:16:25.094 } 00:16:25.094 ] 00:16:25.094 }' 00:16:25.094 14:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.094 14:42:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.691 14:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:25.691 14:42:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.691 14:42:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.691 [2024-11-04 14:42:24.624917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:25.691 [2024-11-04 14:42:24.639112] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:16:25.691 14:42:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.691 14:42:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:25.691 [2024-11-04 14:42:24.641856] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:26.630 14:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:26.630 14:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.630 14:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:26.630 14:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:26.630 14:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.630 14:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.630 14:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.630 14:42:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.630 14:42:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.630 14:42:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.630 14:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.630 "name": "raid_bdev1", 00:16:26.630 "uuid": "3d2d6375-513a-46cf-9e93-00fc5a695dc2", 00:16:26.630 "strip_size_kb": 0, 00:16:26.630 "state": "online", 00:16:26.630 "raid_level": "raid1", 00:16:26.630 "superblock": false, 00:16:26.630 "num_base_bdevs": 4, 00:16:26.630 "num_base_bdevs_discovered": 4, 00:16:26.630 "num_base_bdevs_operational": 4, 00:16:26.630 "process": { 00:16:26.630 "type": "rebuild", 00:16:26.630 "target": "spare", 00:16:26.630 "progress": { 00:16:26.630 "blocks": 20480, 00:16:26.630 "percent": 31 00:16:26.630 } 00:16:26.630 }, 00:16:26.630 "base_bdevs_list": [ 00:16:26.630 { 00:16:26.630 "name": "spare", 00:16:26.630 "uuid": "fe5bcaad-14ba-53a2-a656-83e88f044607", 00:16:26.630 "is_configured": true, 00:16:26.630 "data_offset": 0, 00:16:26.630 "data_size": 65536 00:16:26.630 }, 00:16:26.630 { 00:16:26.630 "name": "BaseBdev2", 00:16:26.630 "uuid": 
"c4722b5a-701f-5e31-95d4-212187c81a3e", 00:16:26.630 "is_configured": true, 00:16:26.630 "data_offset": 0, 00:16:26.630 "data_size": 65536 00:16:26.630 }, 00:16:26.630 { 00:16:26.630 "name": "BaseBdev3", 00:16:26.630 "uuid": "b3fb1f8c-95b5-540c-be68-fec7c6e40d2c", 00:16:26.630 "is_configured": true, 00:16:26.630 "data_offset": 0, 00:16:26.630 "data_size": 65536 00:16:26.630 }, 00:16:26.630 { 00:16:26.630 "name": "BaseBdev4", 00:16:26.630 "uuid": "e676de36-b826-5949-ba8a-8ece6fba344e", 00:16:26.630 "is_configured": true, 00:16:26.630 "data_offset": 0, 00:16:26.630 "data_size": 65536 00:16:26.630 } 00:16:26.630 ] 00:16:26.630 }' 00:16:26.630 14:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.630 14:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:26.889 14:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.889 14:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:26.889 14:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:26.889 14:42:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.889 14:42:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.889 [2024-11-04 14:42:25.798861] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:26.889 [2024-11-04 14:42:25.850817] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:26.889 [2024-11-04 14:42:25.850953] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.889 [2024-11-04 14:42:25.850984] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:26.889 [2024-11-04 14:42:25.851001] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:16:26.889 14:42:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.889 14:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:26.889 14:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.889 14:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.889 14:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:26.889 14:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:26.889 14:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:26.889 14:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.889 14:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.889 14:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.889 14:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.889 14:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.889 14:42:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.889 14:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.889 14:42:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.889 14:42:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.889 14:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.889 "name": "raid_bdev1", 00:16:26.889 "uuid": "3d2d6375-513a-46cf-9e93-00fc5a695dc2", 00:16:26.889 "strip_size_kb": 0, 00:16:26.889 "state": "online", 
00:16:26.889 "raid_level": "raid1", 00:16:26.889 "superblock": false, 00:16:26.889 "num_base_bdevs": 4, 00:16:26.889 "num_base_bdevs_discovered": 3, 00:16:26.889 "num_base_bdevs_operational": 3, 00:16:26.889 "base_bdevs_list": [ 00:16:26.889 { 00:16:26.889 "name": null, 00:16:26.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.889 "is_configured": false, 00:16:26.889 "data_offset": 0, 00:16:26.889 "data_size": 65536 00:16:26.889 }, 00:16:26.889 { 00:16:26.889 "name": "BaseBdev2", 00:16:26.889 "uuid": "c4722b5a-701f-5e31-95d4-212187c81a3e", 00:16:26.889 "is_configured": true, 00:16:26.889 "data_offset": 0, 00:16:26.889 "data_size": 65536 00:16:26.889 }, 00:16:26.889 { 00:16:26.889 "name": "BaseBdev3", 00:16:26.889 "uuid": "b3fb1f8c-95b5-540c-be68-fec7c6e40d2c", 00:16:26.889 "is_configured": true, 00:16:26.889 "data_offset": 0, 00:16:26.889 "data_size": 65536 00:16:26.889 }, 00:16:26.889 { 00:16:26.889 "name": "BaseBdev4", 00:16:26.889 "uuid": "e676de36-b826-5949-ba8a-8ece6fba344e", 00:16:26.889 "is_configured": true, 00:16:26.889 "data_offset": 0, 00:16:26.889 "data_size": 65536 00:16:26.889 } 00:16:26.889 ] 00:16:26.889 }' 00:16:26.889 14:42:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.889 14:42:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.455 14:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:27.455 14:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.455 14:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:27.455 14:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:27.455 14:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.455 14:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:27.455 14:42:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.455 14:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.455 14:42:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.455 14:42:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.455 14:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.455 "name": "raid_bdev1", 00:16:27.455 "uuid": "3d2d6375-513a-46cf-9e93-00fc5a695dc2", 00:16:27.455 "strip_size_kb": 0, 00:16:27.455 "state": "online", 00:16:27.455 "raid_level": "raid1", 00:16:27.455 "superblock": false, 00:16:27.455 "num_base_bdevs": 4, 00:16:27.455 "num_base_bdevs_discovered": 3, 00:16:27.455 "num_base_bdevs_operational": 3, 00:16:27.455 "base_bdevs_list": [ 00:16:27.455 { 00:16:27.455 "name": null, 00:16:27.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.455 "is_configured": false, 00:16:27.455 "data_offset": 0, 00:16:27.455 "data_size": 65536 00:16:27.455 }, 00:16:27.455 { 00:16:27.455 "name": "BaseBdev2", 00:16:27.455 "uuid": "c4722b5a-701f-5e31-95d4-212187c81a3e", 00:16:27.455 "is_configured": true, 00:16:27.455 "data_offset": 0, 00:16:27.455 "data_size": 65536 00:16:27.455 }, 00:16:27.455 { 00:16:27.455 "name": "BaseBdev3", 00:16:27.455 "uuid": "b3fb1f8c-95b5-540c-be68-fec7c6e40d2c", 00:16:27.455 "is_configured": true, 00:16:27.455 "data_offset": 0, 00:16:27.455 "data_size": 65536 00:16:27.455 }, 00:16:27.455 { 00:16:27.455 "name": "BaseBdev4", 00:16:27.455 "uuid": "e676de36-b826-5949-ba8a-8ece6fba344e", 00:16:27.455 "is_configured": true, 00:16:27.455 "data_offset": 0, 00:16:27.455 "data_size": 65536 00:16:27.455 } 00:16:27.455 ] 00:16:27.455 }' 00:16:27.455 14:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.455 14:42:26 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:27.455 14:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.455 14:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:27.455 14:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:27.455 14:42:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.455 14:42:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.455 [2024-11-04 14:42:26.526959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:27.455 [2024-11-04 14:42:26.540350] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:16:27.455 14:42:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.455 14:42:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:27.455 [2024-11-04 14:42:26.542986] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:28.828 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.828 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.828 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.828 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.828 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.828 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.828 14:42:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.828 14:42:27 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:28.828 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.828 14:42:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.828 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.828 "name": "raid_bdev1", 00:16:28.828 "uuid": "3d2d6375-513a-46cf-9e93-00fc5a695dc2", 00:16:28.828 "strip_size_kb": 0, 00:16:28.828 "state": "online", 00:16:28.828 "raid_level": "raid1", 00:16:28.828 "superblock": false, 00:16:28.828 "num_base_bdevs": 4, 00:16:28.828 "num_base_bdevs_discovered": 4, 00:16:28.828 "num_base_bdevs_operational": 4, 00:16:28.828 "process": { 00:16:28.828 "type": "rebuild", 00:16:28.828 "target": "spare", 00:16:28.828 "progress": { 00:16:28.828 "blocks": 20480, 00:16:28.828 "percent": 31 00:16:28.828 } 00:16:28.828 }, 00:16:28.828 "base_bdevs_list": [ 00:16:28.828 { 00:16:28.828 "name": "spare", 00:16:28.828 "uuid": "fe5bcaad-14ba-53a2-a656-83e88f044607", 00:16:28.828 "is_configured": true, 00:16:28.828 "data_offset": 0, 00:16:28.828 "data_size": 65536 00:16:28.828 }, 00:16:28.828 { 00:16:28.828 "name": "BaseBdev2", 00:16:28.828 "uuid": "c4722b5a-701f-5e31-95d4-212187c81a3e", 00:16:28.828 "is_configured": true, 00:16:28.828 "data_offset": 0, 00:16:28.828 "data_size": 65536 00:16:28.828 }, 00:16:28.828 { 00:16:28.828 "name": "BaseBdev3", 00:16:28.828 "uuid": "b3fb1f8c-95b5-540c-be68-fec7c6e40d2c", 00:16:28.828 "is_configured": true, 00:16:28.829 "data_offset": 0, 00:16:28.829 "data_size": 65536 00:16:28.829 }, 00:16:28.829 { 00:16:28.829 "name": "BaseBdev4", 00:16:28.829 "uuid": "e676de36-b826-5949-ba8a-8ece6fba344e", 00:16:28.829 "is_configured": true, 00:16:28.829 "data_offset": 0, 00:16:28.829 "data_size": 65536 00:16:28.829 } 00:16:28.829 ] 00:16:28.829 }' 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.829 [2024-11-04 14:42:27.699920] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:28.829 [2024-11-04 14:42:27.751949] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.829 14:42:27 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.829 "name": "raid_bdev1", 00:16:28.829 "uuid": "3d2d6375-513a-46cf-9e93-00fc5a695dc2", 00:16:28.829 "strip_size_kb": 0, 00:16:28.829 "state": "online", 00:16:28.829 "raid_level": "raid1", 00:16:28.829 "superblock": false, 00:16:28.829 "num_base_bdevs": 4, 00:16:28.829 "num_base_bdevs_discovered": 3, 00:16:28.829 "num_base_bdevs_operational": 3, 00:16:28.829 "process": { 00:16:28.829 "type": "rebuild", 00:16:28.829 "target": "spare", 00:16:28.829 "progress": { 00:16:28.829 "blocks": 24576, 00:16:28.829 "percent": 37 00:16:28.829 } 00:16:28.829 }, 00:16:28.829 "base_bdevs_list": [ 00:16:28.829 { 00:16:28.829 "name": "spare", 00:16:28.829 "uuid": "fe5bcaad-14ba-53a2-a656-83e88f044607", 00:16:28.829 "is_configured": true, 00:16:28.829 "data_offset": 0, 00:16:28.829 "data_size": 65536 00:16:28.829 }, 00:16:28.829 { 00:16:28.829 "name": null, 00:16:28.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.829 "is_configured": false, 00:16:28.829 "data_offset": 0, 00:16:28.829 "data_size": 65536 00:16:28.829 }, 00:16:28.829 { 00:16:28.829 "name": "BaseBdev3", 00:16:28.829 "uuid": "b3fb1f8c-95b5-540c-be68-fec7c6e40d2c", 00:16:28.829 "is_configured": true, 
00:16:28.829 "data_offset": 0, 00:16:28.829 "data_size": 65536 00:16:28.829 }, 00:16:28.829 { 00:16:28.829 "name": "BaseBdev4", 00:16:28.829 "uuid": "e676de36-b826-5949-ba8a-8ece6fba344e", 00:16:28.829 "is_configured": true, 00:16:28.829 "data_offset": 0, 00:16:28.829 "data_size": 65536 00:16:28.829 } 00:16:28.829 ] 00:16:28.829 }' 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=480 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.829 14:42:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.829 14:42:27 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.087 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.087 "name": "raid_bdev1", 00:16:29.087 "uuid": "3d2d6375-513a-46cf-9e93-00fc5a695dc2", 00:16:29.087 "strip_size_kb": 0, 00:16:29.087 "state": "online", 00:16:29.087 "raid_level": "raid1", 00:16:29.087 "superblock": false, 00:16:29.087 "num_base_bdevs": 4, 00:16:29.087 "num_base_bdevs_discovered": 3, 00:16:29.087 "num_base_bdevs_operational": 3, 00:16:29.087 "process": { 00:16:29.087 "type": "rebuild", 00:16:29.087 "target": "spare", 00:16:29.087 "progress": { 00:16:29.087 "blocks": 26624, 00:16:29.087 "percent": 40 00:16:29.087 } 00:16:29.087 }, 00:16:29.087 "base_bdevs_list": [ 00:16:29.087 { 00:16:29.087 "name": "spare", 00:16:29.087 "uuid": "fe5bcaad-14ba-53a2-a656-83e88f044607", 00:16:29.087 "is_configured": true, 00:16:29.087 "data_offset": 0, 00:16:29.087 "data_size": 65536 00:16:29.087 }, 00:16:29.087 { 00:16:29.087 "name": null, 00:16:29.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.087 "is_configured": false, 00:16:29.087 "data_offset": 0, 00:16:29.087 "data_size": 65536 00:16:29.087 }, 00:16:29.087 { 00:16:29.087 "name": "BaseBdev3", 00:16:29.087 "uuid": "b3fb1f8c-95b5-540c-be68-fec7c6e40d2c", 00:16:29.087 "is_configured": true, 00:16:29.087 "data_offset": 0, 00:16:29.087 "data_size": 65536 00:16:29.087 }, 00:16:29.087 { 00:16:29.087 "name": "BaseBdev4", 00:16:29.087 "uuid": "e676de36-b826-5949-ba8a-8ece6fba344e", 00:16:29.087 "is_configured": true, 00:16:29.087 "data_offset": 0, 00:16:29.087 "data_size": 65536 00:16:29.087 } 00:16:29.087 ] 00:16:29.087 }' 00:16:29.087 14:42:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.087 14:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:29.087 14:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:16:29.087 14:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:29.087 14:42:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:30.020 14:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:30.020 14:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:30.020 14:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.020 14:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:30.020 14:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:30.020 14:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.020 14:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.020 14:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.020 14:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.020 14:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.020 14:42:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.020 14:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.020 "name": "raid_bdev1", 00:16:30.020 "uuid": "3d2d6375-513a-46cf-9e93-00fc5a695dc2", 00:16:30.020 "strip_size_kb": 0, 00:16:30.020 "state": "online", 00:16:30.020 "raid_level": "raid1", 00:16:30.020 "superblock": false, 00:16:30.020 "num_base_bdevs": 4, 00:16:30.020 "num_base_bdevs_discovered": 3, 00:16:30.020 "num_base_bdevs_operational": 3, 00:16:30.020 "process": { 00:16:30.020 "type": "rebuild", 00:16:30.020 "target": "spare", 00:16:30.020 "progress": { 00:16:30.020 
"blocks": 51200, 00:16:30.020 "percent": 78 00:16:30.020 } 00:16:30.020 }, 00:16:30.020 "base_bdevs_list": [ 00:16:30.020 { 00:16:30.020 "name": "spare", 00:16:30.020 "uuid": "fe5bcaad-14ba-53a2-a656-83e88f044607", 00:16:30.020 "is_configured": true, 00:16:30.020 "data_offset": 0, 00:16:30.020 "data_size": 65536 00:16:30.020 }, 00:16:30.020 { 00:16:30.020 "name": null, 00:16:30.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.020 "is_configured": false, 00:16:30.020 "data_offset": 0, 00:16:30.020 "data_size": 65536 00:16:30.020 }, 00:16:30.020 { 00:16:30.020 "name": "BaseBdev3", 00:16:30.020 "uuid": "b3fb1f8c-95b5-540c-be68-fec7c6e40d2c", 00:16:30.020 "is_configured": true, 00:16:30.020 "data_offset": 0, 00:16:30.020 "data_size": 65536 00:16:30.020 }, 00:16:30.020 { 00:16:30.020 "name": "BaseBdev4", 00:16:30.020 "uuid": "e676de36-b826-5949-ba8a-8ece6fba344e", 00:16:30.020 "is_configured": true, 00:16:30.020 "data_offset": 0, 00:16:30.020 "data_size": 65536 00:16:30.020 } 00:16:30.020 ] 00:16:30.020 }' 00:16:30.020 14:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.277 14:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:30.277 14:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.277 14:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:30.277 14:42:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:30.842 [2024-11-04 14:42:29.766780] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:30.842 [2024-11-04 14:42:29.766882] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:30.842 [2024-11-04 14:42:29.767013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.408 14:42:30 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:31.408 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:31.408 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.409 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:31.409 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:31.409 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.409 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.409 14:42:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.409 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.409 14:42:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.409 14:42:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.409 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.409 "name": "raid_bdev1", 00:16:31.409 "uuid": "3d2d6375-513a-46cf-9e93-00fc5a695dc2", 00:16:31.409 "strip_size_kb": 0, 00:16:31.409 "state": "online", 00:16:31.409 "raid_level": "raid1", 00:16:31.409 "superblock": false, 00:16:31.409 "num_base_bdevs": 4, 00:16:31.409 "num_base_bdevs_discovered": 3, 00:16:31.409 "num_base_bdevs_operational": 3, 00:16:31.409 "base_bdevs_list": [ 00:16:31.409 { 00:16:31.409 "name": "spare", 00:16:31.409 "uuid": "fe5bcaad-14ba-53a2-a656-83e88f044607", 00:16:31.409 "is_configured": true, 00:16:31.409 "data_offset": 0, 00:16:31.409 "data_size": 65536 00:16:31.409 }, 00:16:31.409 { 00:16:31.409 "name": null, 00:16:31.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.409 "is_configured": false, 00:16:31.409 
"data_offset": 0, 00:16:31.409 "data_size": 65536 00:16:31.409 }, 00:16:31.409 { 00:16:31.409 "name": "BaseBdev3", 00:16:31.409 "uuid": "b3fb1f8c-95b5-540c-be68-fec7c6e40d2c", 00:16:31.409 "is_configured": true, 00:16:31.409 "data_offset": 0, 00:16:31.409 "data_size": 65536 00:16:31.409 }, 00:16:31.409 { 00:16:31.409 "name": "BaseBdev4", 00:16:31.409 "uuid": "e676de36-b826-5949-ba8a-8ece6fba344e", 00:16:31.409 "is_configured": true, 00:16:31.409 "data_offset": 0, 00:16:31.409 "data_size": 65536 00:16:31.409 } 00:16:31.409 ] 00:16:31.409 }' 00:16:31.409 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.409 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:31.409 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.409 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:31.409 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:31.409 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:31.409 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.409 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:31.409 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:31.409 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.409 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.409 14:42:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.409 14:42:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.409 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.409 14:42:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.409 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.409 "name": "raid_bdev1", 00:16:31.409 "uuid": "3d2d6375-513a-46cf-9e93-00fc5a695dc2", 00:16:31.409 "strip_size_kb": 0, 00:16:31.409 "state": "online", 00:16:31.409 "raid_level": "raid1", 00:16:31.409 "superblock": false, 00:16:31.409 "num_base_bdevs": 4, 00:16:31.409 "num_base_bdevs_discovered": 3, 00:16:31.409 "num_base_bdevs_operational": 3, 00:16:31.409 "base_bdevs_list": [ 00:16:31.409 { 00:16:31.409 "name": "spare", 00:16:31.409 "uuid": "fe5bcaad-14ba-53a2-a656-83e88f044607", 00:16:31.409 "is_configured": true, 00:16:31.409 "data_offset": 0, 00:16:31.409 "data_size": 65536 00:16:31.409 }, 00:16:31.409 { 00:16:31.409 "name": null, 00:16:31.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.409 "is_configured": false, 00:16:31.409 "data_offset": 0, 00:16:31.409 "data_size": 65536 00:16:31.409 }, 00:16:31.409 { 00:16:31.409 "name": "BaseBdev3", 00:16:31.409 "uuid": "b3fb1f8c-95b5-540c-be68-fec7c6e40d2c", 00:16:31.409 "is_configured": true, 00:16:31.409 "data_offset": 0, 00:16:31.409 "data_size": 65536 00:16:31.409 }, 00:16:31.409 { 00:16:31.409 "name": "BaseBdev4", 00:16:31.409 "uuid": "e676de36-b826-5949-ba8a-8ece6fba344e", 00:16:31.409 "is_configured": true, 00:16:31.409 "data_offset": 0, 00:16:31.409 "data_size": 65536 00:16:31.409 } 00:16:31.409 ] 00:16:31.409 }' 00:16:31.409 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.409 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:31.409 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.670 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:31.670 
14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:31.670 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.670 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.670 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.670 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.670 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:31.670 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.670 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.670 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.670 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.670 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.670 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.670 14:42:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.670 14:42:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.670 14:42:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.670 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.670 "name": "raid_bdev1", 00:16:31.670 "uuid": "3d2d6375-513a-46cf-9e93-00fc5a695dc2", 00:16:31.670 "strip_size_kb": 0, 00:16:31.670 "state": "online", 00:16:31.670 "raid_level": "raid1", 00:16:31.670 "superblock": false, 00:16:31.670 "num_base_bdevs": 4, 00:16:31.670 "num_base_bdevs_discovered": 
3, 00:16:31.670 "num_base_bdevs_operational": 3, 00:16:31.670 "base_bdevs_list": [ 00:16:31.670 { 00:16:31.670 "name": "spare", 00:16:31.670 "uuid": "fe5bcaad-14ba-53a2-a656-83e88f044607", 00:16:31.670 "is_configured": true, 00:16:31.670 "data_offset": 0, 00:16:31.670 "data_size": 65536 00:16:31.670 }, 00:16:31.670 { 00:16:31.670 "name": null, 00:16:31.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.670 "is_configured": false, 00:16:31.670 "data_offset": 0, 00:16:31.670 "data_size": 65536 00:16:31.670 }, 00:16:31.670 { 00:16:31.670 "name": "BaseBdev3", 00:16:31.670 "uuid": "b3fb1f8c-95b5-540c-be68-fec7c6e40d2c", 00:16:31.670 "is_configured": true, 00:16:31.670 "data_offset": 0, 00:16:31.670 "data_size": 65536 00:16:31.670 }, 00:16:31.670 { 00:16:31.670 "name": "BaseBdev4", 00:16:31.670 "uuid": "e676de36-b826-5949-ba8a-8ece6fba344e", 00:16:31.670 "is_configured": true, 00:16:31.670 "data_offset": 0, 00:16:31.670 "data_size": 65536 00:16:31.670 } 00:16:31.670 ] 00:16:31.670 }' 00:16:31.670 14:42:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.670 14:42:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.241 14:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:32.241 14:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.241 14:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.241 [2024-11-04 14:42:31.103332] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:32.241 [2024-11-04 14:42:31.103509] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.241 [2024-11-04 14:42:31.103719] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.241 [2024-11-04 14:42:31.103957] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:16:32.241 [2024-11-04 14:42:31.104110] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:32.241 14:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.241 14:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.241 14:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.241 14:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:32.241 14:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.241 14:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.241 14:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:32.241 14:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:32.241 14:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:32.241 14:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:32.241 14:42:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:32.241 14:42:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:32.241 14:42:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:32.241 14:42:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:32.241 14:42:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:32.241 14:42:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:32.241 14:42:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:32.241 14:42:31 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:32.241 14:42:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:32.500 /dev/nbd0 00:16:32.500 14:42:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:32.500 14:42:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:32.500 14:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:32.500 14:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:16:32.500 14:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:32.500 14:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:32.500 14:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:32.500 14:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:16:32.500 14:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:32.500 14:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:32.500 14:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:32.500 1+0 records in 00:16:32.500 1+0 records out 00:16:32.500 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041686 s, 9.8 MB/s 00:16:32.500 14:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:32.500 14:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:16:32.500 14:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:32.500 14:42:31 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:32.500 14:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:16:32.500 14:42:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:32.500 14:42:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:32.500 14:42:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:32.760 /dev/nbd1 00:16:32.760 14:42:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:32.760 14:42:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:32.760 14:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:16:32.760 14:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:16:32.760 14:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:32.760 14:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:32.760 14:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:16:32.760 14:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:16:32.760 14:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:32.760 14:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:32.760 14:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:32.760 1+0 records in 00:16:32.760 1+0 records out 00:16:32.760 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434442 s, 9.4 MB/s 00:16:32.760 14:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:32.760 14:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:16:32.760 14:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:32.760 14:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:32.760 14:42:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:16:32.760 14:42:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:32.760 14:42:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:32.760 14:42:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:33.018 14:42:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:33.018 14:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:33.018 14:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:33.018 14:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:33.018 14:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:33.018 14:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:33.018 14:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:33.276 14:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:33.276 14:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:33.276 14:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:33.276 14:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:16:33.276 14:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:33.276 14:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:33.276 14:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:33.276 14:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:33.276 14:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:33.276 14:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:33.535 14:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:33.535 14:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:33.535 14:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:33.535 14:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:33.535 14:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:33.535 14:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:33.535 14:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:33.535 14:42:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:33.535 14:42:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:33.535 14:42:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77788 00:16:33.535 14:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 77788 ']' 00:16:33.535 14:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 77788 00:16:33.535 14:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:16:33.535 14:42:32 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:33.535 14:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77788 00:16:33.794 killing process with pid 77788 00:16:33.794 Received shutdown signal, test time was about 60.000000 seconds 00:16:33.794 00:16:33.794 Latency(us) 00:16:33.794 [2024-11-04T14:42:32.917Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.794 [2024-11-04T14:42:32.917Z] =================================================================================================================== 00:16:33.794 [2024-11-04T14:42:32.917Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:33.794 14:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:33.794 14:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:33.794 14:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77788' 00:16:33.794 14:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 77788 00:16:33.794 [2024-11-04 14:42:32.665311] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:33.794 14:42:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 77788 00:16:34.053 [2024-11-04 14:42:33.113914] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:35.426 ************************************ 00:16:35.426 END TEST raid_rebuild_test 00:16:35.426 ************************************ 00:16:35.426 14:42:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:35.426 00:16:35.426 real 0m20.747s 00:16:35.426 user 0m23.223s 00:16:35.426 sys 0m3.408s 00:16:35.426 14:42:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:35.426 14:42:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.426 
14:42:34 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:16:35.426 14:42:34 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:16:35.426 14:42:34 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:35.426 14:42:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:35.426 ************************************ 00:16:35.426 START TEST raid_rebuild_test_sb 00:16:35.426 ************************************ 00:16:35.426 14:42:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true false true 00:16:35.426 14:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:35.426 14:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:35.426 14:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:35.426 14:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:35.426 14:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:35.426 14:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:35.426 14:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:35.426 14:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:35.426 14:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:35.426 14:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:35.426 14:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:35.426 14:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:35.426 14:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:35.427 
14:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:35.427 14:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:35.427 14:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:35.427 14:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:35.427 14:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:35.427 14:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:35.427 14:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:35.427 14:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:35.427 14:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:35.427 14:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:35.427 14:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:35.427 14:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:35.427 14:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:35.427 14:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:35.427 14:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:35.427 14:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:35.427 14:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:35.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:35.427 14:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78272 00:16:35.427 14:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78272 00:16:35.427 14:42:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 78272 ']' 00:16:35.427 14:42:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:35.427 14:42:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.427 14:42:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:35.427 14:42:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.427 14:42:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:35.427 14:42:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.427 [2024-11-04 14:42:34.317463] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:16:35.427 [2024-11-04 14:42:34.317906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78272 ] 00:16:35.427 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:35.427 Zero copy mechanism will not be used. 
00:16:35.427 [2024-11-04 14:42:34.503769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.685 [2024-11-04 14:42:34.649525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.942 [2024-11-04 14:42:34.855591] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:35.942 [2024-11-04 14:42:34.855666] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.509 BaseBdev1_malloc 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.509 [2024-11-04 14:42:35.412243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:36.509 [2024-11-04 14:42:35.412501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.509 [2024-11-04 14:42:35.412579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:36.509 [2024-11-04 
14:42:35.412769] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.509 [2024-11-04 14:42:35.415545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.509 [2024-11-04 14:42:35.415609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:36.509 BaseBdev1 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.509 BaseBdev2_malloc 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.509 [2024-11-04 14:42:35.461037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:36.509 [2024-11-04 14:42:35.461243] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.509 [2024-11-04 14:42:35.461420] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:36.509 [2024-11-04 14:42:35.461454] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.509 [2024-11-04 14:42:35.464162] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:16:36.509 [2024-11-04 14:42:35.464211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:36.509 BaseBdev2 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.509 BaseBdev3_malloc 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.509 [2024-11-04 14:42:35.523317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:36.509 [2024-11-04 14:42:35.523521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.509 [2024-11-04 14:42:35.523665] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:36.509 [2024-11-04 14:42:35.523698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.509 [2024-11-04 14:42:35.526505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.509 [2024-11-04 14:42:35.526557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:36.509 BaseBdev3 00:16:36.509 14:42:35 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.509 BaseBdev4_malloc 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.509 [2024-11-04 14:42:35.571976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:36.509 [2024-11-04 14:42:35.572165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.509 [2024-11-04 14:42:35.572203] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:36.509 [2024-11-04 14:42:35.572224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.509 [2024-11-04 14:42:35.574967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.509 [2024-11-04 14:42:35.575021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:36.509 BaseBdev4 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.509 14:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:16:36.510 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.510 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.510 spare_malloc 00:16:36.510 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.510 14:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:36.510 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.510 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.510 spare_delay 00:16:36.510 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.510 14:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:36.510 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.510 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.510 [2024-11-04 14:42:35.628130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:36.510 [2024-11-04 14:42:35.628206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.510 [2024-11-04 14:42:35.628237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:36.510 [2024-11-04 14:42:35.628257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.768 [2024-11-04 14:42:35.631083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.768 [2024-11-04 14:42:35.631258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:36.768 spare 00:16:36.768 14:42:35 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.768 14:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:36.768 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.768 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.768 [2024-11-04 14:42:35.636217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:36.768 [2024-11-04 14:42:35.638719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:36.768 [2024-11-04 14:42:35.638959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:36.768 [2024-11-04 14:42:35.639091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:36.768 [2024-11-04 14:42:35.639380] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:36.768 [2024-11-04 14:42:35.639445] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:36.768 [2024-11-04 14:42:35.639853] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:36.768 [2024-11-04 14:42:35.640229] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:36.768 [2024-11-04 14:42:35.640355] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:36.768 [2024-11-04 14:42:35.640614] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.768 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.768 14:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:36.768 14:42:35 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.768 14:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.768 14:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.768 14:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.768 14:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:36.768 14:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.768 14:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.768 14:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.768 14:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.768 14:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.768 14:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.768 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.768 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.768 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.768 14:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.768 "name": "raid_bdev1", 00:16:36.768 "uuid": "4987366f-f62b-4e4d-89c7-4e421ef92d4e", 00:16:36.768 "strip_size_kb": 0, 00:16:36.768 "state": "online", 00:16:36.768 "raid_level": "raid1", 00:16:36.768 "superblock": true, 00:16:36.768 "num_base_bdevs": 4, 00:16:36.768 "num_base_bdevs_discovered": 4, 00:16:36.768 "num_base_bdevs_operational": 4, 00:16:36.768 "base_bdevs_list": [ 00:16:36.768 { 
00:16:36.768 "name": "BaseBdev1", 00:16:36.768 "uuid": "c9e7c16f-b95c-5cee-a409-9fcf22763517", 00:16:36.768 "is_configured": true, 00:16:36.768 "data_offset": 2048, 00:16:36.768 "data_size": 63488 00:16:36.768 }, 00:16:36.768 { 00:16:36.768 "name": "BaseBdev2", 00:16:36.768 "uuid": "a0e9e78f-45b8-5333-bb2e-23c143c8aaec", 00:16:36.768 "is_configured": true, 00:16:36.768 "data_offset": 2048, 00:16:36.768 "data_size": 63488 00:16:36.769 }, 00:16:36.769 { 00:16:36.769 "name": "BaseBdev3", 00:16:36.769 "uuid": "89dea7ea-7084-5900-8ac7-88bb88d64f13", 00:16:36.769 "is_configured": true, 00:16:36.769 "data_offset": 2048, 00:16:36.769 "data_size": 63488 00:16:36.769 }, 00:16:36.769 { 00:16:36.769 "name": "BaseBdev4", 00:16:36.769 "uuid": "89f5a6c5-4af3-5715-b0e0-197bfa64bdbc", 00:16:36.769 "is_configured": true, 00:16:36.769 "data_offset": 2048, 00:16:36.769 "data_size": 63488 00:16:36.769 } 00:16:36.769 ] 00:16:36.769 }' 00:16:36.769 14:42:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.769 14:42:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.335 14:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:37.335 14:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:37.335 14:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.335 14:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.335 [2024-11-04 14:42:36.169191] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:37.335 14:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.335 14:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:16:37.335 14:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:37.335 14:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:37.335 14:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.335 14:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.335 14:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.335 14:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:37.335 14:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:37.335 14:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:37.335 14:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:37.335 14:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:37.335 14:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:37.335 14:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:37.335 14:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:37.335 14:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:37.335 14:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:37.335 14:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:37.335 14:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:37.335 14:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:37.335 14:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:37.593 
[2024-11-04 14:42:36.584908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:37.593 /dev/nbd0 00:16:37.593 14:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:37.594 14:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:37.594 14:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:37.594 14:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:16:37.594 14:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:37.594 14:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:37.594 14:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:37.594 14:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:16:37.594 14:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:37.594 14:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:37.594 14:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:37.594 1+0 records in 00:16:37.594 1+0 records out 00:16:37.594 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00065257 s, 6.3 MB/s 00:16:37.594 14:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:37.594 14:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:16:37.594 14:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:37.594 14:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 
']' 00:16:37.594 14:42:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:16:37.594 14:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:37.594 14:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:37.594 14:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:37.594 14:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:37.594 14:42:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:16:45.701 63488+0 records in 00:16:45.701 63488+0 records out 00:16:45.701 32505856 bytes (33 MB, 31 MiB) copied, 8.15802 s, 4.0 MB/s 00:16:45.701 14:42:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:45.701 14:42:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:45.701 14:42:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:45.701 14:42:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:45.701 14:42:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:45.701 14:42:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:45.701 14:42:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:46.268 14:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:46.268 [2024-11-04 14:42:45.124720] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:46.268 14:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:46.268 14:42:45 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:46.268 14:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:46.268 14:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:46.268 14:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:46.268 14:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:46.268 14:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:46.268 14:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:46.268 14:42:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.268 14:42:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.268 [2024-11-04 14:42:45.136842] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:46.268 14:42:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.268 14:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:46.268 14:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.268 14:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.268 14:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:46.268 14:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:46.268 14:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:46.268 14:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.268 14:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.268 14:42:45 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.268 14:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:46.268 14:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.268 14:42:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.269 14:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.269 14:42:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.269 14:42:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.269 14:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:46.269 "name": "raid_bdev1", 00:16:46.269 "uuid": "4987366f-f62b-4e4d-89c7-4e421ef92d4e", 00:16:46.269 "strip_size_kb": 0, 00:16:46.269 "state": "online", 00:16:46.269 "raid_level": "raid1", 00:16:46.269 "superblock": true, 00:16:46.269 "num_base_bdevs": 4, 00:16:46.269 "num_base_bdevs_discovered": 3, 00:16:46.269 "num_base_bdevs_operational": 3, 00:16:46.269 "base_bdevs_list": [ 00:16:46.269 { 00:16:46.269 "name": null, 00:16:46.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.269 "is_configured": false, 00:16:46.269 "data_offset": 0, 00:16:46.269 "data_size": 63488 00:16:46.269 }, 00:16:46.269 { 00:16:46.269 "name": "BaseBdev2", 00:16:46.269 "uuid": "a0e9e78f-45b8-5333-bb2e-23c143c8aaec", 00:16:46.269 "is_configured": true, 00:16:46.269 "data_offset": 2048, 00:16:46.269 "data_size": 63488 00:16:46.269 }, 00:16:46.269 { 00:16:46.269 "name": "BaseBdev3", 00:16:46.269 "uuid": "89dea7ea-7084-5900-8ac7-88bb88d64f13", 00:16:46.269 "is_configured": true, 00:16:46.269 "data_offset": 2048, 00:16:46.269 "data_size": 63488 00:16:46.269 }, 00:16:46.269 { 00:16:46.269 "name": "BaseBdev4", 00:16:46.269 "uuid": 
"89f5a6c5-4af3-5715-b0e0-197bfa64bdbc", 00:16:46.269 "is_configured": true, 00:16:46.269 "data_offset": 2048, 00:16:46.269 "data_size": 63488 00:16:46.269 } 00:16:46.269 ] 00:16:46.269 }' 00:16:46.269 14:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:46.269 14:42:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.527 14:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:46.527 14:42:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.527 14:42:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.785 [2024-11-04 14:42:45.648984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:46.785 [2024-11-04 14:42:45.663128] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:16:46.785 14:42:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.785 14:42:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:46.785 [2024-11-04 14:42:45.665663] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:47.718 14:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:47.718 14:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.718 14:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:47.718 14:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:47.718 14:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.718 14:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.718 14:42:46 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.718 14:42:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.718 14:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.718 14:42:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.718 14:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.718 "name": "raid_bdev1", 00:16:47.718 "uuid": "4987366f-f62b-4e4d-89c7-4e421ef92d4e", 00:16:47.718 "strip_size_kb": 0, 00:16:47.718 "state": "online", 00:16:47.718 "raid_level": "raid1", 00:16:47.718 "superblock": true, 00:16:47.718 "num_base_bdevs": 4, 00:16:47.718 "num_base_bdevs_discovered": 4, 00:16:47.718 "num_base_bdevs_operational": 4, 00:16:47.718 "process": { 00:16:47.718 "type": "rebuild", 00:16:47.718 "target": "spare", 00:16:47.718 "progress": { 00:16:47.718 "blocks": 20480, 00:16:47.718 "percent": 32 00:16:47.718 } 00:16:47.718 }, 00:16:47.718 "base_bdevs_list": [ 00:16:47.718 { 00:16:47.718 "name": "spare", 00:16:47.718 "uuid": "d4fb31a9-43e3-59bc-a213-87a5fa75fd9c", 00:16:47.718 "is_configured": true, 00:16:47.718 "data_offset": 2048, 00:16:47.718 "data_size": 63488 00:16:47.718 }, 00:16:47.718 { 00:16:47.718 "name": "BaseBdev2", 00:16:47.718 "uuid": "a0e9e78f-45b8-5333-bb2e-23c143c8aaec", 00:16:47.718 "is_configured": true, 00:16:47.718 "data_offset": 2048, 00:16:47.718 "data_size": 63488 00:16:47.718 }, 00:16:47.718 { 00:16:47.718 "name": "BaseBdev3", 00:16:47.718 "uuid": "89dea7ea-7084-5900-8ac7-88bb88d64f13", 00:16:47.718 "is_configured": true, 00:16:47.718 "data_offset": 2048, 00:16:47.718 "data_size": 63488 00:16:47.718 }, 00:16:47.718 { 00:16:47.718 "name": "BaseBdev4", 00:16:47.718 "uuid": "89f5a6c5-4af3-5715-b0e0-197bfa64bdbc", 00:16:47.718 "is_configured": true, 00:16:47.718 "data_offset": 2048, 00:16:47.718 "data_size": 63488 
00:16:47.718 } 00:16:47.718 ] 00:16:47.718 }' 00:16:47.718 14:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.718 14:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:47.718 14:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.718 14:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:47.718 14:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:47.718 14:42:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.718 14:42:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.718 [2024-11-04 14:42:46.831322] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:47.980 [2024-11-04 14:42:46.874556] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:47.980 [2024-11-04 14:42:46.874858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.980 [2024-11-04 14:42:46.874890] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:47.980 [2024-11-04 14:42:46.874906] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:47.980 14:42:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.980 14:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:47.980 14:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.980 14:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.980 14:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:16:47.980 14:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:47.980 14:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:47.980 14:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.980 14:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.980 14:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.980 14:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.980 14:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.980 14:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.980 14:42:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.980 14:42:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.980 14:42:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.980 14:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.980 "name": "raid_bdev1", 00:16:47.980 "uuid": "4987366f-f62b-4e4d-89c7-4e421ef92d4e", 00:16:47.980 "strip_size_kb": 0, 00:16:47.980 "state": "online", 00:16:47.980 "raid_level": "raid1", 00:16:47.980 "superblock": true, 00:16:47.980 "num_base_bdevs": 4, 00:16:47.980 "num_base_bdevs_discovered": 3, 00:16:47.980 "num_base_bdevs_operational": 3, 00:16:47.980 "base_bdevs_list": [ 00:16:47.980 { 00:16:47.980 "name": null, 00:16:47.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.980 "is_configured": false, 00:16:47.980 "data_offset": 0, 00:16:47.980 "data_size": 63488 00:16:47.980 }, 00:16:47.980 { 00:16:47.980 "name": "BaseBdev2", 00:16:47.980 "uuid": 
"a0e9e78f-45b8-5333-bb2e-23c143c8aaec", 00:16:47.980 "is_configured": true, 00:16:47.980 "data_offset": 2048, 00:16:47.980 "data_size": 63488 00:16:47.980 }, 00:16:47.980 { 00:16:47.980 "name": "BaseBdev3", 00:16:47.980 "uuid": "89dea7ea-7084-5900-8ac7-88bb88d64f13", 00:16:47.980 "is_configured": true, 00:16:47.980 "data_offset": 2048, 00:16:47.980 "data_size": 63488 00:16:47.980 }, 00:16:47.980 { 00:16:47.980 "name": "BaseBdev4", 00:16:47.980 "uuid": "89f5a6c5-4af3-5715-b0e0-197bfa64bdbc", 00:16:47.980 "is_configured": true, 00:16:47.980 "data_offset": 2048, 00:16:47.980 "data_size": 63488 00:16:47.980 } 00:16:47.980 ] 00:16:47.980 }' 00:16:47.980 14:42:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.980 14:42:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.546 14:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:48.546 14:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.546 14:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:48.546 14:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:48.546 14:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.546 14:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.546 14:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.546 14:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.546 14:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.546 14:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.546 14:42:47 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.546 "name": "raid_bdev1", 00:16:48.546 "uuid": "4987366f-f62b-4e4d-89c7-4e421ef92d4e", 00:16:48.546 "strip_size_kb": 0, 00:16:48.546 "state": "online", 00:16:48.546 "raid_level": "raid1", 00:16:48.546 "superblock": true, 00:16:48.546 "num_base_bdevs": 4, 00:16:48.546 "num_base_bdevs_discovered": 3, 00:16:48.546 "num_base_bdevs_operational": 3, 00:16:48.546 "base_bdevs_list": [ 00:16:48.546 { 00:16:48.546 "name": null, 00:16:48.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.546 "is_configured": false, 00:16:48.546 "data_offset": 0, 00:16:48.546 "data_size": 63488 00:16:48.546 }, 00:16:48.546 { 00:16:48.546 "name": "BaseBdev2", 00:16:48.546 "uuid": "a0e9e78f-45b8-5333-bb2e-23c143c8aaec", 00:16:48.546 "is_configured": true, 00:16:48.546 "data_offset": 2048, 00:16:48.546 "data_size": 63488 00:16:48.546 }, 00:16:48.546 { 00:16:48.546 "name": "BaseBdev3", 00:16:48.546 "uuid": "89dea7ea-7084-5900-8ac7-88bb88d64f13", 00:16:48.546 "is_configured": true, 00:16:48.546 "data_offset": 2048, 00:16:48.546 "data_size": 63488 00:16:48.546 }, 00:16:48.546 { 00:16:48.546 "name": "BaseBdev4", 00:16:48.546 "uuid": "89f5a6c5-4af3-5715-b0e0-197bfa64bdbc", 00:16:48.546 "is_configured": true, 00:16:48.546 "data_offset": 2048, 00:16:48.546 "data_size": 63488 00:16:48.546 } 00:16:48.546 ] 00:16:48.546 }' 00:16:48.546 14:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.546 14:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:48.546 14:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.546 14:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:48.546 14:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:48.546 14:42:47 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.546 14:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.546 [2024-11-04 14:42:47.534659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:48.546 [2024-11-04 14:42:47.548151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:16:48.546 14:42:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.546 14:42:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:48.546 [2024-11-04 14:42:47.550763] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:49.480 14:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:49.480 14:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:49.480 14:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:49.480 14:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:49.480 14:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:49.480 14:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.480 14:42:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.480 14:42:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.480 14:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.480 14:42:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.739 14:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.739 "name": "raid_bdev1", 00:16:49.739 "uuid": 
"4987366f-f62b-4e4d-89c7-4e421ef92d4e", 00:16:49.739 "strip_size_kb": 0, 00:16:49.739 "state": "online", 00:16:49.739 "raid_level": "raid1", 00:16:49.739 "superblock": true, 00:16:49.739 "num_base_bdevs": 4, 00:16:49.739 "num_base_bdevs_discovered": 4, 00:16:49.739 "num_base_bdevs_operational": 4, 00:16:49.739 "process": { 00:16:49.739 "type": "rebuild", 00:16:49.739 "target": "spare", 00:16:49.739 "progress": { 00:16:49.739 "blocks": 20480, 00:16:49.739 "percent": 32 00:16:49.739 } 00:16:49.739 }, 00:16:49.739 "base_bdevs_list": [ 00:16:49.739 { 00:16:49.739 "name": "spare", 00:16:49.739 "uuid": "d4fb31a9-43e3-59bc-a213-87a5fa75fd9c", 00:16:49.739 "is_configured": true, 00:16:49.739 "data_offset": 2048, 00:16:49.739 "data_size": 63488 00:16:49.739 }, 00:16:49.739 { 00:16:49.739 "name": "BaseBdev2", 00:16:49.739 "uuid": "a0e9e78f-45b8-5333-bb2e-23c143c8aaec", 00:16:49.739 "is_configured": true, 00:16:49.739 "data_offset": 2048, 00:16:49.739 "data_size": 63488 00:16:49.739 }, 00:16:49.739 { 00:16:49.739 "name": "BaseBdev3", 00:16:49.739 "uuid": "89dea7ea-7084-5900-8ac7-88bb88d64f13", 00:16:49.739 "is_configured": true, 00:16:49.739 "data_offset": 2048, 00:16:49.739 "data_size": 63488 00:16:49.739 }, 00:16:49.739 { 00:16:49.739 "name": "BaseBdev4", 00:16:49.739 "uuid": "89f5a6c5-4af3-5715-b0e0-197bfa64bdbc", 00:16:49.739 "is_configured": true, 00:16:49.739 "data_offset": 2048, 00:16:49.739 "data_size": 63488 00:16:49.739 } 00:16:49.739 ] 00:16:49.739 }' 00:16:49.739 14:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.739 14:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:49.739 14:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.739 14:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:49.739 14:42:48 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:49.739 14:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:49.739 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:49.739 14:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:49.739 14:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:49.739 14:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:49.739 14:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:49.739 14:42:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.739 14:42:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.739 [2024-11-04 14:42:48.724268] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:49.739 [2024-11-04 14:42:48.859562] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:16:49.997 14:42:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.997 14:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:49.997 14:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:49.997 14:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:49.997 14:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:49.997 14:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:49.997 14:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:49.998 14:42:48 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:49.998 14:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.998 14:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.998 14:42:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.998 14:42:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.998 14:42:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.998 14:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.998 "name": "raid_bdev1", 00:16:49.998 "uuid": "4987366f-f62b-4e4d-89c7-4e421ef92d4e", 00:16:49.998 "strip_size_kb": 0, 00:16:49.998 "state": "online", 00:16:49.998 "raid_level": "raid1", 00:16:49.998 "superblock": true, 00:16:49.998 "num_base_bdevs": 4, 00:16:49.998 "num_base_bdevs_discovered": 3, 00:16:49.998 "num_base_bdevs_operational": 3, 00:16:49.998 "process": { 00:16:49.998 "type": "rebuild", 00:16:49.998 "target": "spare", 00:16:49.998 "progress": { 00:16:49.998 "blocks": 24576, 00:16:49.998 "percent": 38 00:16:49.998 } 00:16:49.998 }, 00:16:49.998 "base_bdevs_list": [ 00:16:49.998 { 00:16:49.998 "name": "spare", 00:16:49.998 "uuid": "d4fb31a9-43e3-59bc-a213-87a5fa75fd9c", 00:16:49.998 "is_configured": true, 00:16:49.998 "data_offset": 2048, 00:16:49.998 "data_size": 63488 00:16:49.998 }, 00:16:49.998 { 00:16:49.998 "name": null, 00:16:49.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.998 "is_configured": false, 00:16:49.998 "data_offset": 0, 00:16:49.998 "data_size": 63488 00:16:49.998 }, 00:16:49.998 { 00:16:49.998 "name": "BaseBdev3", 00:16:49.998 "uuid": "89dea7ea-7084-5900-8ac7-88bb88d64f13", 00:16:49.998 "is_configured": true, 00:16:49.998 "data_offset": 2048, 00:16:49.998 "data_size": 63488 00:16:49.998 }, 00:16:49.998 { 00:16:49.998 "name": 
"BaseBdev4", 00:16:49.998 "uuid": "89f5a6c5-4af3-5715-b0e0-197bfa64bdbc", 00:16:49.998 "is_configured": true, 00:16:49.998 "data_offset": 2048, 00:16:49.998 "data_size": 63488 00:16:49.998 } 00:16:49.998 ] 00:16:49.998 }' 00:16:49.998 14:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.998 14:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:49.998 14:42:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.998 14:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:49.998 14:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=502 00:16:49.998 14:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:49.998 14:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:49.998 14:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:49.998 14:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:49.998 14:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:49.998 14:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:49.998 14:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.998 14:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.998 14:42:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.998 14:42:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.998 14:42:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:49.998 14:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.998 "name": "raid_bdev1", 00:16:49.998 "uuid": "4987366f-f62b-4e4d-89c7-4e421ef92d4e", 00:16:49.998 "strip_size_kb": 0, 00:16:49.998 "state": "online", 00:16:49.998 "raid_level": "raid1", 00:16:49.998 "superblock": true, 00:16:49.998 "num_base_bdevs": 4, 00:16:49.998 "num_base_bdevs_discovered": 3, 00:16:49.998 "num_base_bdevs_operational": 3, 00:16:49.998 "process": { 00:16:49.998 "type": "rebuild", 00:16:49.998 "target": "spare", 00:16:49.998 "progress": { 00:16:49.998 "blocks": 26624, 00:16:49.998 "percent": 41 00:16:49.998 } 00:16:49.998 }, 00:16:49.998 "base_bdevs_list": [ 00:16:49.998 { 00:16:49.998 "name": "spare", 00:16:49.998 "uuid": "d4fb31a9-43e3-59bc-a213-87a5fa75fd9c", 00:16:49.998 "is_configured": true, 00:16:49.998 "data_offset": 2048, 00:16:49.998 "data_size": 63488 00:16:49.998 }, 00:16:49.998 { 00:16:49.998 "name": null, 00:16:49.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.998 "is_configured": false, 00:16:49.998 "data_offset": 0, 00:16:49.998 "data_size": 63488 00:16:49.998 }, 00:16:49.998 { 00:16:49.998 "name": "BaseBdev3", 00:16:49.998 "uuid": "89dea7ea-7084-5900-8ac7-88bb88d64f13", 00:16:49.998 "is_configured": true, 00:16:49.998 "data_offset": 2048, 00:16:49.998 "data_size": 63488 00:16:49.998 }, 00:16:49.998 { 00:16:49.998 "name": "BaseBdev4", 00:16:49.998 "uuid": "89f5a6c5-4af3-5715-b0e0-197bfa64bdbc", 00:16:49.998 "is_configured": true, 00:16:49.998 "data_offset": 2048, 00:16:49.998 "data_size": 63488 00:16:49.998 } 00:16:49.998 ] 00:16:49.998 }' 00:16:49.998 14:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.257 14:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:50.257 14:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.257 14:42:49 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:50.257 14:42:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:51.221 14:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:51.221 14:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:51.221 14:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.221 14:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:51.221 14:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:51.221 14:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.221 14:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.221 14:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.221 14:42:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.221 14:42:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.221 14:42:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.221 14:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.221 "name": "raid_bdev1", 00:16:51.221 "uuid": "4987366f-f62b-4e4d-89c7-4e421ef92d4e", 00:16:51.221 "strip_size_kb": 0, 00:16:51.221 "state": "online", 00:16:51.221 "raid_level": "raid1", 00:16:51.221 "superblock": true, 00:16:51.221 "num_base_bdevs": 4, 00:16:51.221 "num_base_bdevs_discovered": 3, 00:16:51.221 "num_base_bdevs_operational": 3, 00:16:51.221 "process": { 00:16:51.221 "type": "rebuild", 00:16:51.221 "target": "spare", 00:16:51.221 "progress": { 00:16:51.221 "blocks": 
51200, 00:16:51.221 "percent": 80 00:16:51.221 } 00:16:51.221 }, 00:16:51.221 "base_bdevs_list": [ 00:16:51.221 { 00:16:51.221 "name": "spare", 00:16:51.221 "uuid": "d4fb31a9-43e3-59bc-a213-87a5fa75fd9c", 00:16:51.221 "is_configured": true, 00:16:51.221 "data_offset": 2048, 00:16:51.221 "data_size": 63488 00:16:51.221 }, 00:16:51.221 { 00:16:51.221 "name": null, 00:16:51.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.221 "is_configured": false, 00:16:51.221 "data_offset": 0, 00:16:51.221 "data_size": 63488 00:16:51.221 }, 00:16:51.221 { 00:16:51.221 "name": "BaseBdev3", 00:16:51.221 "uuid": "89dea7ea-7084-5900-8ac7-88bb88d64f13", 00:16:51.221 "is_configured": true, 00:16:51.221 "data_offset": 2048, 00:16:51.221 "data_size": 63488 00:16:51.221 }, 00:16:51.221 { 00:16:51.221 "name": "BaseBdev4", 00:16:51.221 "uuid": "89f5a6c5-4af3-5715-b0e0-197bfa64bdbc", 00:16:51.221 "is_configured": true, 00:16:51.221 "data_offset": 2048, 00:16:51.221 "data_size": 63488 00:16:51.221 } 00:16:51.221 ] 00:16:51.221 }' 00:16:51.221 14:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.221 14:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:51.221 14:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.479 14:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:51.479 14:42:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:51.737 [2024-11-04 14:42:50.773481] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:51.737 [2024-11-04 14:42:50.774752] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:51.737 [2024-11-04 14:42:50.774951] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:52.303 14:42:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:52.303 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:52.303 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.303 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:52.303 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:52.303 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.303 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.303 14:42:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.303 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.303 14:42:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.303 14:42:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.303 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.303 "name": "raid_bdev1", 00:16:52.303 "uuid": "4987366f-f62b-4e4d-89c7-4e421ef92d4e", 00:16:52.303 "strip_size_kb": 0, 00:16:52.303 "state": "online", 00:16:52.303 "raid_level": "raid1", 00:16:52.303 "superblock": true, 00:16:52.303 "num_base_bdevs": 4, 00:16:52.303 "num_base_bdevs_discovered": 3, 00:16:52.303 "num_base_bdevs_operational": 3, 00:16:52.303 "base_bdevs_list": [ 00:16:52.303 { 00:16:52.303 "name": "spare", 00:16:52.303 "uuid": "d4fb31a9-43e3-59bc-a213-87a5fa75fd9c", 00:16:52.303 "is_configured": true, 00:16:52.303 "data_offset": 2048, 00:16:52.303 "data_size": 63488 00:16:52.303 }, 00:16:52.303 { 00:16:52.303 "name": null, 00:16:52.303 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:52.303 "is_configured": false, 00:16:52.303 "data_offset": 0, 00:16:52.303 "data_size": 63488 00:16:52.303 }, 00:16:52.303 { 00:16:52.303 "name": "BaseBdev3", 00:16:52.303 "uuid": "89dea7ea-7084-5900-8ac7-88bb88d64f13", 00:16:52.303 "is_configured": true, 00:16:52.303 "data_offset": 2048, 00:16:52.303 "data_size": 63488 00:16:52.303 }, 00:16:52.303 { 00:16:52.303 "name": "BaseBdev4", 00:16:52.303 "uuid": "89f5a6c5-4af3-5715-b0e0-197bfa64bdbc", 00:16:52.303 "is_configured": true, 00:16:52.303 "data_offset": 2048, 00:16:52.303 "data_size": 63488 00:16:52.303 } 00:16:52.303 ] 00:16:52.303 }' 00:16:52.303 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.563 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:52.563 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.563 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:52.563 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:52.563 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:52.563 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.563 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:52.563 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:52.563 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.563 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.563 14:42:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.563 14:42:51 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.563 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.563 14:42:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.563 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.563 "name": "raid_bdev1", 00:16:52.563 "uuid": "4987366f-f62b-4e4d-89c7-4e421ef92d4e", 00:16:52.563 "strip_size_kb": 0, 00:16:52.563 "state": "online", 00:16:52.563 "raid_level": "raid1", 00:16:52.563 "superblock": true, 00:16:52.563 "num_base_bdevs": 4, 00:16:52.563 "num_base_bdevs_discovered": 3, 00:16:52.563 "num_base_bdevs_operational": 3, 00:16:52.563 "base_bdevs_list": [ 00:16:52.563 { 00:16:52.563 "name": "spare", 00:16:52.563 "uuid": "d4fb31a9-43e3-59bc-a213-87a5fa75fd9c", 00:16:52.563 "is_configured": true, 00:16:52.563 "data_offset": 2048, 00:16:52.563 "data_size": 63488 00:16:52.563 }, 00:16:52.563 { 00:16:52.563 "name": null, 00:16:52.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.563 "is_configured": false, 00:16:52.563 "data_offset": 0, 00:16:52.563 "data_size": 63488 00:16:52.563 }, 00:16:52.563 { 00:16:52.563 "name": "BaseBdev3", 00:16:52.563 "uuid": "89dea7ea-7084-5900-8ac7-88bb88d64f13", 00:16:52.563 "is_configured": true, 00:16:52.563 "data_offset": 2048, 00:16:52.563 "data_size": 63488 00:16:52.563 }, 00:16:52.563 { 00:16:52.563 "name": "BaseBdev4", 00:16:52.563 "uuid": "89f5a6c5-4af3-5715-b0e0-197bfa64bdbc", 00:16:52.563 "is_configured": true, 00:16:52.563 "data_offset": 2048, 00:16:52.563 "data_size": 63488 00:16:52.563 } 00:16:52.563 ] 00:16:52.563 }' 00:16:52.563 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.563 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:52.563 14:42:51 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.824 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:52.824 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:52.824 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.824 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:52.824 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.824 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:52.824 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:52.824 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.824 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.824 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.824 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.824 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.824 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.824 14:42:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.824 14:42:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.824 14:42:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.824 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.824 "name": "raid_bdev1", 00:16:52.824 "uuid": 
"4987366f-f62b-4e4d-89c7-4e421ef92d4e", 00:16:52.824 "strip_size_kb": 0, 00:16:52.824 "state": "online", 00:16:52.824 "raid_level": "raid1", 00:16:52.824 "superblock": true, 00:16:52.824 "num_base_bdevs": 4, 00:16:52.824 "num_base_bdevs_discovered": 3, 00:16:52.824 "num_base_bdevs_operational": 3, 00:16:52.824 "base_bdevs_list": [ 00:16:52.824 { 00:16:52.824 "name": "spare", 00:16:52.824 "uuid": "d4fb31a9-43e3-59bc-a213-87a5fa75fd9c", 00:16:52.824 "is_configured": true, 00:16:52.824 "data_offset": 2048, 00:16:52.824 "data_size": 63488 00:16:52.824 }, 00:16:52.824 { 00:16:52.824 "name": null, 00:16:52.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.824 "is_configured": false, 00:16:52.824 "data_offset": 0, 00:16:52.824 "data_size": 63488 00:16:52.824 }, 00:16:52.824 { 00:16:52.824 "name": "BaseBdev3", 00:16:52.824 "uuid": "89dea7ea-7084-5900-8ac7-88bb88d64f13", 00:16:52.824 "is_configured": true, 00:16:52.824 "data_offset": 2048, 00:16:52.824 "data_size": 63488 00:16:52.824 }, 00:16:52.824 { 00:16:52.824 "name": "BaseBdev4", 00:16:52.824 "uuid": "89f5a6c5-4af3-5715-b0e0-197bfa64bdbc", 00:16:52.824 "is_configured": true, 00:16:52.824 "data_offset": 2048, 00:16:52.824 "data_size": 63488 00:16:52.824 } 00:16:52.824 ] 00:16:52.824 }' 00:16:52.824 14:42:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.824 14:42:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.085 14:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:53.085 14:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.085 14:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.085 [2024-11-04 14:42:52.198774] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:53.085 [2024-11-04 14:42:52.198969] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:16:53.085 [2024-11-04 14:42:52.199185] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:53.085 [2024-11-04 14:42:52.199410] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:53.085 [2024-11-04 14:42:52.199437] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:53.085 14:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.344 14:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.344 14:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.344 14:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:53.344 14:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.344 14:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.344 14:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:53.344 14:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:53.344 14:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:53.344 14:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:53.344 14:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:53.344 14:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:53.344 14:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:53.344 14:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:16:53.344 14:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:53.344 14:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:53.344 14:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:53.344 14:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:53.344 14:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:53.603 /dev/nbd0 00:16:53.603 14:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:53.603 14:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:53.603 14:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:53.603 14:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:16:53.603 14:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:53.603 14:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:53.603 14:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:53.603 14:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:16:53.603 14:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:53.603 14:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:53.603 14:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:53.603 1+0 records in 00:16:53.603 1+0 records out 00:16:53.603 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332113 s, 12.3 MB/s 00:16:53.603 14:42:52 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:53.603 14:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:16:53.603 14:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:53.603 14:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:53.603 14:42:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:16:53.603 14:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:53.603 14:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:53.603 14:42:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:54.170 /dev/nbd1 00:16:54.170 14:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:54.170 14:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:54.170 14:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:16:54.170 14:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:16:54.170 14:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:54.170 14:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:54.170 14:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:16:54.170 14:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:16:54.170 14:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:54.170 14:42:53 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:54.170 14:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:54.170 1+0 records in 00:16:54.170 1+0 records out 00:16:54.170 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368275 s, 11.1 MB/s 00:16:54.170 14:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:54.170 14:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:16:54.170 14:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:54.170 14:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:54.170 14:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:16:54.170 14:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:54.170 14:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:54.170 14:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:54.170 14:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:54.170 14:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:54.170 14:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:54.170 14:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:54.170 14:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:54.170 14:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:54.170 14:42:53 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:54.737 14:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:54.737 14:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:54.737 14:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:54.737 14:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:54.737 14:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:54.737 14:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:54.737 14:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:54.737 14:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:54.737 14:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:54.737 14:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:54.995 14:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:54.995 14:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:54.995 14:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:54.995 14:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:54.995 14:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:54.995 14:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:54.995 14:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:54.995 14:42:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@45 -- # return 0 00:16:54.995 14:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:54.995 14:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:54.995 14:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.995 14:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.995 14:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.995 14:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:54.995 14:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.995 14:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.995 [2024-11-04 14:42:53.903845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:54.995 [2024-11-04 14:42:53.904117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.995 [2024-11-04 14:42:53.904395] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:54.995 [2024-11-04 14:42:53.904423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.995 [2024-11-04 14:42:53.907449] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.995 [2024-11-04 14:42:53.907506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:54.995 [2024-11-04 14:42:53.907650] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:54.995 [2024-11-04 14:42:53.907716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:54.995 [2024-11-04 14:42:53.907909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:16:54.995 [2024-11-04 14:42:53.908140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:54.995 spare 00:16:54.995 14:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.995 14:42:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:54.995 14:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.995 14:42:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.995 [2024-11-04 14:42:54.008293] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:54.995 [2024-11-04 14:42:54.008614] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:54.995 [2024-11-04 14:42:54.009107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:16:54.995 [2024-11-04 14:42:54.009382] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:54.995 [2024-11-04 14:42:54.009404] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:54.995 [2024-11-04 14:42:54.009626] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.995 14:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.995 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:54.995 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:54.995 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.995 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.995 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:16:54.995 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:54.995 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.995 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.995 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.995 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.995 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.995 14:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.995 14:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.995 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.995 14:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.995 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.995 "name": "raid_bdev1", 00:16:54.995 "uuid": "4987366f-f62b-4e4d-89c7-4e421ef92d4e", 00:16:54.995 "strip_size_kb": 0, 00:16:54.995 "state": "online", 00:16:54.995 "raid_level": "raid1", 00:16:54.996 "superblock": true, 00:16:54.996 "num_base_bdevs": 4, 00:16:54.996 "num_base_bdevs_discovered": 3, 00:16:54.996 "num_base_bdevs_operational": 3, 00:16:54.996 "base_bdevs_list": [ 00:16:54.996 { 00:16:54.996 "name": "spare", 00:16:54.996 "uuid": "d4fb31a9-43e3-59bc-a213-87a5fa75fd9c", 00:16:54.996 "is_configured": true, 00:16:54.996 "data_offset": 2048, 00:16:54.996 "data_size": 63488 00:16:54.996 }, 00:16:54.996 { 00:16:54.996 "name": null, 00:16:54.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.996 "is_configured": false, 00:16:54.996 "data_offset": 2048, 
00:16:54.996 "data_size": 63488 00:16:54.996 }, 00:16:54.996 { 00:16:54.996 "name": "BaseBdev3", 00:16:54.996 "uuid": "89dea7ea-7084-5900-8ac7-88bb88d64f13", 00:16:54.996 "is_configured": true, 00:16:54.996 "data_offset": 2048, 00:16:54.996 "data_size": 63488 00:16:54.996 }, 00:16:54.996 { 00:16:54.996 "name": "BaseBdev4", 00:16:54.996 "uuid": "89f5a6c5-4af3-5715-b0e0-197bfa64bdbc", 00:16:54.996 "is_configured": true, 00:16:54.996 "data_offset": 2048, 00:16:54.996 "data_size": 63488 00:16:54.996 } 00:16:54.996 ] 00:16:54.996 }' 00:16:54.996 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.996 14:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.561 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:55.562 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.562 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:55.562 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:55.562 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.562 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.562 14:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.562 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.562 14:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.562 14:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.562 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.562 "name": "raid_bdev1", 00:16:55.562 "uuid": 
"4987366f-f62b-4e4d-89c7-4e421ef92d4e", 00:16:55.562 "strip_size_kb": 0, 00:16:55.562 "state": "online", 00:16:55.562 "raid_level": "raid1", 00:16:55.562 "superblock": true, 00:16:55.562 "num_base_bdevs": 4, 00:16:55.562 "num_base_bdevs_discovered": 3, 00:16:55.562 "num_base_bdevs_operational": 3, 00:16:55.562 "base_bdevs_list": [ 00:16:55.562 { 00:16:55.562 "name": "spare", 00:16:55.562 "uuid": "d4fb31a9-43e3-59bc-a213-87a5fa75fd9c", 00:16:55.562 "is_configured": true, 00:16:55.562 "data_offset": 2048, 00:16:55.562 "data_size": 63488 00:16:55.562 }, 00:16:55.562 { 00:16:55.562 "name": null, 00:16:55.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.562 "is_configured": false, 00:16:55.562 "data_offset": 2048, 00:16:55.562 "data_size": 63488 00:16:55.562 }, 00:16:55.562 { 00:16:55.562 "name": "BaseBdev3", 00:16:55.562 "uuid": "89dea7ea-7084-5900-8ac7-88bb88d64f13", 00:16:55.562 "is_configured": true, 00:16:55.562 "data_offset": 2048, 00:16:55.562 "data_size": 63488 00:16:55.562 }, 00:16:55.562 { 00:16:55.562 "name": "BaseBdev4", 00:16:55.562 "uuid": "89f5a6c5-4af3-5715-b0e0-197bfa64bdbc", 00:16:55.562 "is_configured": true, 00:16:55.562 "data_offset": 2048, 00:16:55.562 "data_size": 63488 00:16:55.562 } 00:16:55.562 ] 00:16:55.562 }' 00:16:55.562 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.562 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:55.562 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.562 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:55.562 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.562 14:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.562 14:42:54 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:55.562 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:55.562 14:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.562 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.562 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:55.562 14:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.562 14:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.562 [2024-11-04 14:42:54.680337] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:55.821 14:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.821 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:55.821 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.821 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.821 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:55.821 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:55.821 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:55.821 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.821 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.821 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.821 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:16:55.821 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.821 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.821 14:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.821 14:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.821 14:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.821 14:42:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.821 "name": "raid_bdev1", 00:16:55.821 "uuid": "4987366f-f62b-4e4d-89c7-4e421ef92d4e", 00:16:55.821 "strip_size_kb": 0, 00:16:55.821 "state": "online", 00:16:55.821 "raid_level": "raid1", 00:16:55.821 "superblock": true, 00:16:55.821 "num_base_bdevs": 4, 00:16:55.821 "num_base_bdevs_discovered": 2, 00:16:55.821 "num_base_bdevs_operational": 2, 00:16:55.821 "base_bdevs_list": [ 00:16:55.821 { 00:16:55.821 "name": null, 00:16:55.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.821 "is_configured": false, 00:16:55.821 "data_offset": 0, 00:16:55.821 "data_size": 63488 00:16:55.821 }, 00:16:55.821 { 00:16:55.821 "name": null, 00:16:55.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.821 "is_configured": false, 00:16:55.821 "data_offset": 2048, 00:16:55.821 "data_size": 63488 00:16:55.821 }, 00:16:55.821 { 00:16:55.821 "name": "BaseBdev3", 00:16:55.821 "uuid": "89dea7ea-7084-5900-8ac7-88bb88d64f13", 00:16:55.821 "is_configured": true, 00:16:55.821 "data_offset": 2048, 00:16:55.821 "data_size": 63488 00:16:55.821 }, 00:16:55.821 { 00:16:55.821 "name": "BaseBdev4", 00:16:55.821 "uuid": "89f5a6c5-4af3-5715-b0e0-197bfa64bdbc", 00:16:55.821 "is_configured": true, 00:16:55.821 "data_offset": 2048, 00:16:55.821 "data_size": 63488 00:16:55.821 } 00:16:55.821 ] 00:16:55.821 }' 00:16:55.821 14:42:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.821 14:42:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.079 14:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:56.079 14:42:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.079 14:42:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.079 [2024-11-04 14:42:55.148452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:56.079 [2024-11-04 14:42:55.148692] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:56.079 [2024-11-04 14:42:55.148718] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:56.079 [2024-11-04 14:42:55.148770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:56.079 [2024-11-04 14:42:55.161967] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:16:56.079 14:42:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.079 14:42:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:56.079 [2024-11-04 14:42:55.164559] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.480 "name": "raid_bdev1", 00:16:57.480 "uuid": "4987366f-f62b-4e4d-89c7-4e421ef92d4e", 00:16:57.480 "strip_size_kb": 0, 00:16:57.480 "state": "online", 00:16:57.480 "raid_level": "raid1", 00:16:57.480 "superblock": true, 00:16:57.480 "num_base_bdevs": 4, 00:16:57.480 "num_base_bdevs_discovered": 3, 00:16:57.480 "num_base_bdevs_operational": 3, 00:16:57.480 "process": { 00:16:57.480 "type": "rebuild", 00:16:57.480 "target": "spare", 00:16:57.480 "progress": { 00:16:57.480 "blocks": 20480, 00:16:57.480 "percent": 32 00:16:57.480 } 00:16:57.480 }, 00:16:57.480 "base_bdevs_list": [ 00:16:57.480 { 00:16:57.480 "name": "spare", 00:16:57.480 "uuid": "d4fb31a9-43e3-59bc-a213-87a5fa75fd9c", 00:16:57.480 "is_configured": true, 00:16:57.480 "data_offset": 2048, 00:16:57.480 "data_size": 63488 00:16:57.480 }, 00:16:57.480 { 00:16:57.480 "name": null, 00:16:57.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.480 "is_configured": false, 00:16:57.480 "data_offset": 2048, 00:16:57.480 "data_size": 63488 00:16:57.480 }, 00:16:57.480 { 00:16:57.480 "name": "BaseBdev3", 00:16:57.480 "uuid": "89dea7ea-7084-5900-8ac7-88bb88d64f13", 00:16:57.480 "is_configured": true, 00:16:57.480 "data_offset": 2048, 00:16:57.480 "data_size": 
63488 00:16:57.480 }, 00:16:57.480 { 00:16:57.480 "name": "BaseBdev4", 00:16:57.480 "uuid": "89f5a6c5-4af3-5715-b0e0-197bfa64bdbc", 00:16:57.480 "is_configured": true, 00:16:57.480 "data_offset": 2048, 00:16:57.480 "data_size": 63488 00:16:57.480 } 00:16:57.480 ] 00:16:57.480 }' 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.480 [2024-11-04 14:42:56.349746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:57.480 [2024-11-04 14:42:56.373605] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:57.480 [2024-11-04 14:42:56.373974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.480 [2024-11-04 14:42:56.374014] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:57.480 [2024-11-04 14:42:56.374027] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.480 "name": "raid_bdev1", 00:16:57.480 "uuid": "4987366f-f62b-4e4d-89c7-4e421ef92d4e", 00:16:57.480 "strip_size_kb": 0, 00:16:57.480 "state": "online", 00:16:57.480 "raid_level": "raid1", 00:16:57.480 "superblock": true, 00:16:57.480 "num_base_bdevs": 4, 00:16:57.480 "num_base_bdevs_discovered": 2, 00:16:57.480 "num_base_bdevs_operational": 2, 00:16:57.480 "base_bdevs_list": [ 00:16:57.480 { 00:16:57.480 "name": null, 
00:16:57.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.480 "is_configured": false, 00:16:57.480 "data_offset": 0, 00:16:57.480 "data_size": 63488 00:16:57.480 }, 00:16:57.480 { 00:16:57.480 "name": null, 00:16:57.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.480 "is_configured": false, 00:16:57.480 "data_offset": 2048, 00:16:57.480 "data_size": 63488 00:16:57.480 }, 00:16:57.480 { 00:16:57.480 "name": "BaseBdev3", 00:16:57.480 "uuid": "89dea7ea-7084-5900-8ac7-88bb88d64f13", 00:16:57.480 "is_configured": true, 00:16:57.480 "data_offset": 2048, 00:16:57.480 "data_size": 63488 00:16:57.480 }, 00:16:57.480 { 00:16:57.480 "name": "BaseBdev4", 00:16:57.480 "uuid": "89f5a6c5-4af3-5715-b0e0-197bfa64bdbc", 00:16:57.480 "is_configured": true, 00:16:57.480 "data_offset": 2048, 00:16:57.480 "data_size": 63488 00:16:57.480 } 00:16:57.480 ] 00:16:57.480 }' 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.480 14:42:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.047 14:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:58.047 14:42:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.047 14:42:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.047 [2024-11-04 14:42:56.881858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:58.047 [2024-11-04 14:42:56.882088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.047 [2024-11-04 14:42:56.882142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:16:58.047 [2024-11-04 14:42:56.882160] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.047 [2024-11-04 14:42:56.882763] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:16:58.047 [2024-11-04 14:42:56.882789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:58.047 [2024-11-04 14:42:56.882921] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:58.047 [2024-11-04 14:42:56.882962] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:58.047 [2024-11-04 14:42:56.882983] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:58.047 [2024-11-04 14:42:56.883023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:58.047 spare 00:16:58.047 [2024-11-04 14:42:56.896185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:16:58.047 14:42:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.047 14:42:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:58.047 [2024-11-04 14:42:56.898691] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:58.986 14:42:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:58.986 14:42:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.986 14:42:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:58.986 14:42:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:58.986 14:42:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.986 14:42:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.986 14:42:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:58.986 14:42:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.986 14:42:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.986 14:42:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.986 14:42:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.986 "name": "raid_bdev1", 00:16:58.986 "uuid": "4987366f-f62b-4e4d-89c7-4e421ef92d4e", 00:16:58.986 "strip_size_kb": 0, 00:16:58.986 "state": "online", 00:16:58.986 "raid_level": "raid1", 00:16:58.986 "superblock": true, 00:16:58.986 "num_base_bdevs": 4, 00:16:58.986 "num_base_bdevs_discovered": 3, 00:16:58.986 "num_base_bdevs_operational": 3, 00:16:58.986 "process": { 00:16:58.986 "type": "rebuild", 00:16:58.986 "target": "spare", 00:16:58.986 "progress": { 00:16:58.986 "blocks": 20480, 00:16:58.986 "percent": 32 00:16:58.986 } 00:16:58.986 }, 00:16:58.986 "base_bdevs_list": [ 00:16:58.986 { 00:16:58.986 "name": "spare", 00:16:58.986 "uuid": "d4fb31a9-43e3-59bc-a213-87a5fa75fd9c", 00:16:58.986 "is_configured": true, 00:16:58.986 "data_offset": 2048, 00:16:58.986 "data_size": 63488 00:16:58.986 }, 00:16:58.986 { 00:16:58.986 "name": null, 00:16:58.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.986 "is_configured": false, 00:16:58.986 "data_offset": 2048, 00:16:58.986 "data_size": 63488 00:16:58.986 }, 00:16:58.986 { 00:16:58.986 "name": "BaseBdev3", 00:16:58.986 "uuid": "89dea7ea-7084-5900-8ac7-88bb88d64f13", 00:16:58.986 "is_configured": true, 00:16:58.986 "data_offset": 2048, 00:16:58.986 "data_size": 63488 00:16:58.986 }, 00:16:58.986 { 00:16:58.986 "name": "BaseBdev4", 00:16:58.986 "uuid": "89f5a6c5-4af3-5715-b0e0-197bfa64bdbc", 00:16:58.986 "is_configured": true, 00:16:58.986 "data_offset": 2048, 00:16:58.986 "data_size": 63488 00:16:58.986 } 00:16:58.986 ] 00:16:58.986 }' 00:16:58.986 14:42:57 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.986 14:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:58.986 14:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.986 14:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:58.986 14:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:58.986 14:42:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.986 14:42:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.986 [2024-11-04 14:42:58.063753] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:59.245 [2024-11-04 14:42:58.107541] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:59.245 [2024-11-04 14:42:58.107810] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.245 [2024-11-04 14:42:58.107841] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:59.245 [2024-11-04 14:42:58.107857] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:59.245 14:42:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.245 14:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:59.245 14:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.245 14:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.245 14:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.245 14:42:58 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.245 14:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:59.245 14:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.245 14:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.245 14:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.245 14:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.245 14:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.245 14:42:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.245 14:42:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.245 14:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.245 14:42:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.245 14:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.245 "name": "raid_bdev1", 00:16:59.245 "uuid": "4987366f-f62b-4e4d-89c7-4e421ef92d4e", 00:16:59.245 "strip_size_kb": 0, 00:16:59.245 "state": "online", 00:16:59.245 "raid_level": "raid1", 00:16:59.245 "superblock": true, 00:16:59.245 "num_base_bdevs": 4, 00:16:59.245 "num_base_bdevs_discovered": 2, 00:16:59.245 "num_base_bdevs_operational": 2, 00:16:59.245 "base_bdevs_list": [ 00:16:59.245 { 00:16:59.245 "name": null, 00:16:59.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.245 "is_configured": false, 00:16:59.245 "data_offset": 0, 00:16:59.245 "data_size": 63488 00:16:59.245 }, 00:16:59.245 { 00:16:59.245 "name": null, 00:16:59.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.245 
"is_configured": false, 00:16:59.245 "data_offset": 2048, 00:16:59.245 "data_size": 63488 00:16:59.245 }, 00:16:59.245 { 00:16:59.245 "name": "BaseBdev3", 00:16:59.245 "uuid": "89dea7ea-7084-5900-8ac7-88bb88d64f13", 00:16:59.245 "is_configured": true, 00:16:59.245 "data_offset": 2048, 00:16:59.245 "data_size": 63488 00:16:59.245 }, 00:16:59.245 { 00:16:59.245 "name": "BaseBdev4", 00:16:59.245 "uuid": "89f5a6c5-4af3-5715-b0e0-197bfa64bdbc", 00:16:59.245 "is_configured": true, 00:16:59.245 "data_offset": 2048, 00:16:59.245 "data_size": 63488 00:16:59.245 } 00:16:59.245 ] 00:16:59.245 }' 00:16:59.245 14:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.245 14:42:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.812 14:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:59.812 14:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.812 14:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:59.812 14:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:59.812 14:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.812 14:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.812 14:42:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.812 14:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.812 14:42:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.812 14:42:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.812 14:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:16:59.812 "name": "raid_bdev1", 00:16:59.812 "uuid": "4987366f-f62b-4e4d-89c7-4e421ef92d4e", 00:16:59.812 "strip_size_kb": 0, 00:16:59.812 "state": "online", 00:16:59.812 "raid_level": "raid1", 00:16:59.812 "superblock": true, 00:16:59.812 "num_base_bdevs": 4, 00:16:59.812 "num_base_bdevs_discovered": 2, 00:16:59.812 "num_base_bdevs_operational": 2, 00:16:59.812 "base_bdevs_list": [ 00:16:59.812 { 00:16:59.812 "name": null, 00:16:59.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.812 "is_configured": false, 00:16:59.812 "data_offset": 0, 00:16:59.812 "data_size": 63488 00:16:59.812 }, 00:16:59.812 { 00:16:59.812 "name": null, 00:16:59.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.812 "is_configured": false, 00:16:59.812 "data_offset": 2048, 00:16:59.812 "data_size": 63488 00:16:59.812 }, 00:16:59.812 { 00:16:59.812 "name": "BaseBdev3", 00:16:59.812 "uuid": "89dea7ea-7084-5900-8ac7-88bb88d64f13", 00:16:59.812 "is_configured": true, 00:16:59.812 "data_offset": 2048, 00:16:59.812 "data_size": 63488 00:16:59.812 }, 00:16:59.812 { 00:16:59.812 "name": "BaseBdev4", 00:16:59.812 "uuid": "89f5a6c5-4af3-5715-b0e0-197bfa64bdbc", 00:16:59.812 "is_configured": true, 00:16:59.812 "data_offset": 2048, 00:16:59.812 "data_size": 63488 00:16:59.812 } 00:16:59.812 ] 00:16:59.812 }' 00:16:59.812 14:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.812 14:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:59.812 14:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.812 14:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:59.812 14:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:59.812 14:42:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:59.812 14:42:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.812 14:42:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.812 14:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:59.812 14:42:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.812 14:42:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.812 [2024-11-04 14:42:58.835831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:59.812 [2024-11-04 14:42:58.836076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.812 [2024-11-04 14:42:58.836116] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:16:59.812 [2024-11-04 14:42:58.836137] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.812 [2024-11-04 14:42:58.836732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.812 [2024-11-04 14:42:58.836770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:59.812 [2024-11-04 14:42:58.836870] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:59.812 [2024-11-04 14:42:58.836897] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:59.812 [2024-11-04 14:42:58.836910] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:59.812 [2024-11-04 14:42:58.836955] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:59.812 BaseBdev1 00:16:59.812 14:42:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:59.812 14:42:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:00.747 14:42:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:00.747 14:42:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.747 14:42:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.747 14:42:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:00.747 14:42:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:00.747 14:42:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:00.747 14:42:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.747 14:42:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.747 14:42:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.747 14:42:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.747 14:42:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.747 14:42:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.747 14:42:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.747 14:42:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.005 14:42:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.005 14:42:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.005 "name": "raid_bdev1", 00:17:01.005 "uuid": "4987366f-f62b-4e4d-89c7-4e421ef92d4e", 00:17:01.005 "strip_size_kb": 0, 
00:17:01.005 "state": "online", 00:17:01.005 "raid_level": "raid1", 00:17:01.005 "superblock": true, 00:17:01.005 "num_base_bdevs": 4, 00:17:01.005 "num_base_bdevs_discovered": 2, 00:17:01.005 "num_base_bdevs_operational": 2, 00:17:01.005 "base_bdevs_list": [ 00:17:01.005 { 00:17:01.005 "name": null, 00:17:01.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.005 "is_configured": false, 00:17:01.005 "data_offset": 0, 00:17:01.005 "data_size": 63488 00:17:01.005 }, 00:17:01.005 { 00:17:01.005 "name": null, 00:17:01.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.005 "is_configured": false, 00:17:01.005 "data_offset": 2048, 00:17:01.005 "data_size": 63488 00:17:01.005 }, 00:17:01.005 { 00:17:01.005 "name": "BaseBdev3", 00:17:01.005 "uuid": "89dea7ea-7084-5900-8ac7-88bb88d64f13", 00:17:01.005 "is_configured": true, 00:17:01.005 "data_offset": 2048, 00:17:01.005 "data_size": 63488 00:17:01.005 }, 00:17:01.005 { 00:17:01.005 "name": "BaseBdev4", 00:17:01.005 "uuid": "89f5a6c5-4af3-5715-b0e0-197bfa64bdbc", 00:17:01.005 "is_configured": true, 00:17:01.005 "data_offset": 2048, 00:17:01.005 "data_size": 63488 00:17:01.005 } 00:17:01.005 ] 00:17:01.005 }' 00:17:01.005 14:42:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.005 14:42:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.572 14:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:01.572 14:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.572 14:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:01.572 14:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:01.572 14:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.572 14:43:00 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.572 14:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.572 14:43:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.572 14:43:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.572 14:43:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.572 14:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.572 "name": "raid_bdev1", 00:17:01.572 "uuid": "4987366f-f62b-4e4d-89c7-4e421ef92d4e", 00:17:01.572 "strip_size_kb": 0, 00:17:01.572 "state": "online", 00:17:01.572 "raid_level": "raid1", 00:17:01.572 "superblock": true, 00:17:01.572 "num_base_bdevs": 4, 00:17:01.572 "num_base_bdevs_discovered": 2, 00:17:01.572 "num_base_bdevs_operational": 2, 00:17:01.572 "base_bdevs_list": [ 00:17:01.572 { 00:17:01.572 "name": null, 00:17:01.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.572 "is_configured": false, 00:17:01.572 "data_offset": 0, 00:17:01.572 "data_size": 63488 00:17:01.572 }, 00:17:01.572 { 00:17:01.572 "name": null, 00:17:01.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.572 "is_configured": false, 00:17:01.572 "data_offset": 2048, 00:17:01.572 "data_size": 63488 00:17:01.572 }, 00:17:01.572 { 00:17:01.572 "name": "BaseBdev3", 00:17:01.572 "uuid": "89dea7ea-7084-5900-8ac7-88bb88d64f13", 00:17:01.572 "is_configured": true, 00:17:01.572 "data_offset": 2048, 00:17:01.572 "data_size": 63488 00:17:01.572 }, 00:17:01.572 { 00:17:01.572 "name": "BaseBdev4", 00:17:01.572 "uuid": "89f5a6c5-4af3-5715-b0e0-197bfa64bdbc", 00:17:01.572 "is_configured": true, 00:17:01.572 "data_offset": 2048, 00:17:01.572 "data_size": 63488 00:17:01.572 } 00:17:01.572 ] 00:17:01.572 }' 00:17:01.572 14:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:17:01.572 14:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:01.572 14:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.572 14:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:01.572 14:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:01.572 14:43:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:17:01.572 14:43:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:01.572 14:43:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:01.572 14:43:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:01.572 14:43:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:01.572 14:43:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:01.572 14:43:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:01.572 14:43:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.572 14:43:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.572 [2024-11-04 14:43:00.556407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:01.572 [2024-11-04 14:43:00.556632] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:17:01.572 [2024-11-04 14:43:00.556652] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this 
bdev's uuid 00:17:01.572 request: 00:17:01.572 { 00:17:01.572 "base_bdev": "BaseBdev1", 00:17:01.572 "raid_bdev": "raid_bdev1", 00:17:01.572 "method": "bdev_raid_add_base_bdev", 00:17:01.572 "req_id": 1 00:17:01.572 } 00:17:01.572 Got JSON-RPC error response 00:17:01.572 response: 00:17:01.572 { 00:17:01.572 "code": -22, 00:17:01.572 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:01.572 } 00:17:01.572 14:43:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:01.572 14:43:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:17:01.572 14:43:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:01.572 14:43:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:01.572 14:43:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:01.572 14:43:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:02.551 14:43:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:02.551 14:43:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.551 14:43:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.551 14:43:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.551 14:43:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.551 14:43:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:02.551 14:43:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.551 14:43:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.551 14:43:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- 
# local num_base_bdevs_discovered 00:17:02.551 14:43:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.551 14:43:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.551 14:43:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.551 14:43:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.551 14:43:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.551 14:43:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.551 14:43:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.551 "name": "raid_bdev1", 00:17:02.551 "uuid": "4987366f-f62b-4e4d-89c7-4e421ef92d4e", 00:17:02.552 "strip_size_kb": 0, 00:17:02.552 "state": "online", 00:17:02.552 "raid_level": "raid1", 00:17:02.552 "superblock": true, 00:17:02.552 "num_base_bdevs": 4, 00:17:02.552 "num_base_bdevs_discovered": 2, 00:17:02.552 "num_base_bdevs_operational": 2, 00:17:02.552 "base_bdevs_list": [ 00:17:02.552 { 00:17:02.552 "name": null, 00:17:02.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.552 "is_configured": false, 00:17:02.552 "data_offset": 0, 00:17:02.552 "data_size": 63488 00:17:02.552 }, 00:17:02.552 { 00:17:02.552 "name": null, 00:17:02.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.552 "is_configured": false, 00:17:02.552 "data_offset": 2048, 00:17:02.552 "data_size": 63488 00:17:02.552 }, 00:17:02.552 { 00:17:02.552 "name": "BaseBdev3", 00:17:02.552 "uuid": "89dea7ea-7084-5900-8ac7-88bb88d64f13", 00:17:02.552 "is_configured": true, 00:17:02.552 "data_offset": 2048, 00:17:02.552 "data_size": 63488 00:17:02.552 }, 00:17:02.552 { 00:17:02.552 "name": "BaseBdev4", 00:17:02.552 "uuid": "89f5a6c5-4af3-5715-b0e0-197bfa64bdbc", 00:17:02.552 "is_configured": true, 00:17:02.552 
"data_offset": 2048, 00:17:02.552 "data_size": 63488 00:17:02.552 } 00:17:02.552 ] 00:17:02.552 }' 00:17:02.552 14:43:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.552 14:43:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.120 14:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:03.120 14:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.120 14:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:03.120 14:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:03.120 14:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.120 14:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.120 14:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.120 14:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.120 14:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.120 14:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.120 14:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.121 "name": "raid_bdev1", 00:17:03.121 "uuid": "4987366f-f62b-4e4d-89c7-4e421ef92d4e", 00:17:03.121 "strip_size_kb": 0, 00:17:03.121 "state": "online", 00:17:03.121 "raid_level": "raid1", 00:17:03.121 "superblock": true, 00:17:03.121 "num_base_bdevs": 4, 00:17:03.121 "num_base_bdevs_discovered": 2, 00:17:03.121 "num_base_bdevs_operational": 2, 00:17:03.121 "base_bdevs_list": [ 00:17:03.121 { 00:17:03.121 "name": null, 00:17:03.121 "uuid": "00000000-0000-0000-0000-000000000000", 
00:17:03.121 "is_configured": false, 00:17:03.121 "data_offset": 0, 00:17:03.121 "data_size": 63488 00:17:03.121 }, 00:17:03.121 { 00:17:03.121 "name": null, 00:17:03.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.121 "is_configured": false, 00:17:03.121 "data_offset": 2048, 00:17:03.121 "data_size": 63488 00:17:03.121 }, 00:17:03.121 { 00:17:03.121 "name": "BaseBdev3", 00:17:03.121 "uuid": "89dea7ea-7084-5900-8ac7-88bb88d64f13", 00:17:03.121 "is_configured": true, 00:17:03.121 "data_offset": 2048, 00:17:03.121 "data_size": 63488 00:17:03.121 }, 00:17:03.121 { 00:17:03.121 "name": "BaseBdev4", 00:17:03.121 "uuid": "89f5a6c5-4af3-5715-b0e0-197bfa64bdbc", 00:17:03.121 "is_configured": true, 00:17:03.121 "data_offset": 2048, 00:17:03.121 "data_size": 63488 00:17:03.121 } 00:17:03.121 ] 00:17:03.121 }' 00:17:03.121 14:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.121 14:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:03.121 14:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.121 14:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:03.121 14:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78272 00:17:03.121 14:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 78272 ']' 00:17:03.121 14:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 78272 00:17:03.121 14:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:17:03.121 14:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:03.121 14:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78272 00:17:03.380 killing process with pid 78272 00:17:03.380 Received shutdown signal, 
test time was about 60.000000 seconds 00:17:03.380 00:17:03.380 Latency(us) 00:17:03.380 [2024-11-04T14:43:02.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:03.380 [2024-11-04T14:43:02.503Z] =================================================================================================================== 00:17:03.380 [2024-11-04T14:43:02.503Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:03.380 14:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:03.380 14:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:03.380 14:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78272' 00:17:03.380 14:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 78272 00:17:03.380 [2024-11-04 14:43:02.264489] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:03.380 14:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 78272 00:17:03.380 [2024-11-04 14:43:02.264623] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:03.380 [2024-11-04 14:43:02.264706] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:03.380 [2024-11-04 14:43:02.264722] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:03.639 [2024-11-04 14:43:02.696767] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:05.015 00:17:05.015 real 0m29.534s 00:17:05.015 user 0m36.352s 00:17:05.015 sys 0m4.149s 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:05.015 ************************************ 00:17:05.015 END TEST raid_rebuild_test_sb 00:17:05.015 ************************************ 00:17:05.015 14:43:03 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:17:05.015 14:43:03 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:17:05.015 14:43:03 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:05.015 14:43:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:05.015 ************************************ 00:17:05.015 START TEST raid_rebuild_test_io 00:17:05.015 ************************************ 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false true true 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:05.015 14:43:03 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79066 00:17:05.015 14:43:03 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79066 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 79066 ']' 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:05.015 14:43:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:05.015 [2024-11-04 14:43:03.913193] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:17:05.015 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:05.015 Zero copy mechanism will not be used. 
00:17:05.015 [2024-11-04 14:43:03.913548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79066 ] 00:17:05.015 [2024-11-04 14:43:04.105466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.274 [2024-11-04 14:43:04.263228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.532 [2024-11-04 14:43:04.467764] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:05.532 [2024-11-04 14:43:04.468108] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:05.791 14:43:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:05.791 14:43:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:17:05.791 14:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:05.791 14:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:05.791 14:43:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.791 14:43:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.067 BaseBdev1_malloc 00:17:06.067 14:43:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.067 14:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:06.067 14:43:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.067 14:43:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.067 [2024-11-04 14:43:04.945765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:17:06.067 [2024-11-04 14:43:04.946061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.067 [2024-11-04 14:43:04.946138] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:06.067 [2024-11-04 14:43:04.946337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.067 [2024-11-04 14:43:04.949256] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.067 [2024-11-04 14:43:04.949461] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:06.067 BaseBdev1 00:17:06.067 14:43:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.067 14:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:06.067 14:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:06.067 14:43:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.067 14:43:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.067 BaseBdev2_malloc 00:17:06.067 14:43:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.067 14:43:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:06.067 14:43:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.067 14:43:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.067 [2024-11-04 14:43:04.996961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:06.067 [2024-11-04 14:43:04.997224] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.067 [2024-11-04 14:43:04.997435] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:06.067 [2024-11-04 14:43:04.997585] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.067 [2024-11-04 14:43:05.000336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.067 BaseBdev2 00:17:06.067 [2024-11-04 14:43:05.000517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:06.067 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.067 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:06.067 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:06.067 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.067 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.067 BaseBdev3_malloc 00:17:06.067 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.067 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:06.067 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.067 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.067 [2024-11-04 14:43:05.062625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:06.067 [2024-11-04 14:43:05.062839] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.067 [2024-11-04 14:43:05.062913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:06.067 [2024-11-04 14:43:05.063172] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:17:06.067 [2024-11-04 14:43:05.066100] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.067 [2024-11-04 14:43:05.066255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:06.067 BaseBdev3 00:17:06.067 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.067 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:06.067 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:06.067 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.067 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.067 BaseBdev4_malloc 00:17:06.067 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.067 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:06.067 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.067 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.067 [2024-11-04 14:43:05.121642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:06.067 [2024-11-04 14:43:05.121851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.067 [2024-11-04 14:43:05.122015] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:06.067 [2024-11-04 14:43:05.122048] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.067 [2024-11-04 14:43:05.125086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.067 [2024-11-04 14:43:05.125267] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:06.067 BaseBdev4 00:17:06.067 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.067 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:06.067 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.067 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.067 spare_malloc 00:17:06.067 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.067 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:06.067 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.067 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.067 spare_delay 00:17:06.067 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.067 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:06.067 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.067 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.067 [2024-11-04 14:43:05.186275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:06.067 [2024-11-04 14:43:05.186525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.067 [2024-11-04 14:43:05.186564] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:06.067 [2024-11-04 14:43:05.186583] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:17:06.326 spare 00:17:06.326 [2024-11-04 14:43:05.189491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.326 [2024-11-04 14:43:05.189552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:06.327 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.327 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:06.327 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.327 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.327 [2024-11-04 14:43:05.194456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:06.327 [2024-11-04 14:43:05.196975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:06.327 [2024-11-04 14:43:05.197190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:06.327 [2024-11-04 14:43:05.197385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:06.327 [2024-11-04 14:43:05.197598] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:06.327 [2024-11-04 14:43:05.197725] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:06.327 [2024-11-04 14:43:05.198181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:06.327 [2024-11-04 14:43:05.198542] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:06.327 [2024-11-04 14:43:05.198661] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:06.327 [2024-11-04 14:43:05.199043] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:17:06.327 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.327 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:06.327 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.327 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.327 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:06.327 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:06.327 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:06.327 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.327 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.327 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.327 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.327 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.327 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.327 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.327 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.327 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.327 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.327 "name": "raid_bdev1", 00:17:06.327 "uuid": "41dab2cc-09c9-4919-bf36-c04a0d6bec36", 00:17:06.327 
"strip_size_kb": 0, 00:17:06.327 "state": "online", 00:17:06.327 "raid_level": "raid1", 00:17:06.327 "superblock": false, 00:17:06.327 "num_base_bdevs": 4, 00:17:06.327 "num_base_bdevs_discovered": 4, 00:17:06.327 "num_base_bdevs_operational": 4, 00:17:06.327 "base_bdevs_list": [ 00:17:06.327 { 00:17:06.327 "name": "BaseBdev1", 00:17:06.327 "uuid": "78821639-f9e8-5519-b239-0d71c129bdda", 00:17:06.327 "is_configured": true, 00:17:06.327 "data_offset": 0, 00:17:06.327 "data_size": 65536 00:17:06.327 }, 00:17:06.327 { 00:17:06.327 "name": "BaseBdev2", 00:17:06.327 "uuid": "eb39070c-c234-529c-aeb3-dd3be76c8124", 00:17:06.327 "is_configured": true, 00:17:06.327 "data_offset": 0, 00:17:06.327 "data_size": 65536 00:17:06.327 }, 00:17:06.327 { 00:17:06.327 "name": "BaseBdev3", 00:17:06.327 "uuid": "19d9792b-8d0b-5c04-8e67-28fea6838768", 00:17:06.327 "is_configured": true, 00:17:06.327 "data_offset": 0, 00:17:06.327 "data_size": 65536 00:17:06.327 }, 00:17:06.327 { 00:17:06.327 "name": "BaseBdev4", 00:17:06.327 "uuid": "09cbdc2d-e565-511e-b5a7-93d84ba4a4f2", 00:17:06.327 "is_configured": true, 00:17:06.327 "data_offset": 0, 00:17:06.327 "data_size": 65536 00:17:06.327 } 00:17:06.327 ] 00:17:06.327 }' 00:17:06.327 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.327 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.894 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:06.894 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:06.894 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.894 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.894 [2024-11-04 14:43:05.731660] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:06.894 14:43:05 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.894 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:17:06.894 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:06.894 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.894 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.894 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.894 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.894 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:06.894 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:17:06.894 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:06.894 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:06.894 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.894 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.894 [2024-11-04 14:43:05.835231] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:06.894 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.894 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:06.894 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.895 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:17:06.895 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:06.895 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:06.895 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:06.895 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.895 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.895 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.895 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.895 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.895 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.895 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.895 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.895 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.895 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.895 "name": "raid_bdev1", 00:17:06.895 "uuid": "41dab2cc-09c9-4919-bf36-c04a0d6bec36", 00:17:06.895 "strip_size_kb": 0, 00:17:06.895 "state": "online", 00:17:06.895 "raid_level": "raid1", 00:17:06.895 "superblock": false, 00:17:06.895 "num_base_bdevs": 4, 00:17:06.895 "num_base_bdevs_discovered": 3, 00:17:06.895 "num_base_bdevs_operational": 3, 00:17:06.895 "base_bdevs_list": [ 00:17:06.895 { 00:17:06.895 "name": null, 00:17:06.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.895 "is_configured": false, 00:17:06.895 "data_offset": 0, 00:17:06.895 "data_size": 65536 00:17:06.895 
}, 00:17:06.895 { 00:17:06.895 "name": "BaseBdev2", 00:17:06.895 "uuid": "eb39070c-c234-529c-aeb3-dd3be76c8124", 00:17:06.895 "is_configured": true, 00:17:06.895 "data_offset": 0, 00:17:06.895 "data_size": 65536 00:17:06.895 }, 00:17:06.895 { 00:17:06.895 "name": "BaseBdev3", 00:17:06.895 "uuid": "19d9792b-8d0b-5c04-8e67-28fea6838768", 00:17:06.895 "is_configured": true, 00:17:06.895 "data_offset": 0, 00:17:06.895 "data_size": 65536 00:17:06.895 }, 00:17:06.895 { 00:17:06.895 "name": "BaseBdev4", 00:17:06.895 "uuid": "09cbdc2d-e565-511e-b5a7-93d84ba4a4f2", 00:17:06.895 "is_configured": true, 00:17:06.895 "data_offset": 0, 00:17:06.895 "data_size": 65536 00:17:06.895 } 00:17:06.895 ] 00:17:06.895 }' 00:17:06.895 14:43:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.895 14:43:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.895 [2024-11-04 14:43:05.963583] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:06.895 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:06.895 Zero copy mechanism will not be used. 00:17:06.895 Running I/O for 60 seconds... 
00:17:07.461 14:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:07.461 14:43:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.461 14:43:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:07.461 [2024-11-04 14:43:06.389495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:07.461 14:43:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.462 14:43:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:07.462 [2024-11-04 14:43:06.450503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:17:07.462 [2024-11-04 14:43:06.453551] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:07.462 [2024-11-04 14:43:06.566278] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:07.462 [2024-11-04 14:43:06.568157] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:07.720 [2024-11-04 14:43:06.782524] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:07.720 [2024-11-04 14:43:06.783150] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:07.978 164.00 IOPS, 492.00 MiB/s [2024-11-04T14:43:07.102Z] [2024-11-04 14:43:07.068905] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:08.544 14:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:08.544 14:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:17:08.544 14:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:08.544 14:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:08.544 14:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.544 14:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.544 14:43:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.544 14:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.544 14:43:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:08.544 14:43:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.544 14:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.544 "name": "raid_bdev1", 00:17:08.544 "uuid": "41dab2cc-09c9-4919-bf36-c04a0d6bec36", 00:17:08.544 "strip_size_kb": 0, 00:17:08.544 "state": "online", 00:17:08.544 "raid_level": "raid1", 00:17:08.544 "superblock": false, 00:17:08.544 "num_base_bdevs": 4, 00:17:08.544 "num_base_bdevs_discovered": 4, 00:17:08.544 "num_base_bdevs_operational": 4, 00:17:08.544 "process": { 00:17:08.544 "type": "rebuild", 00:17:08.544 "target": "spare", 00:17:08.544 "progress": { 00:17:08.544 "blocks": 12288, 00:17:08.544 "percent": 18 00:17:08.544 } 00:17:08.544 }, 00:17:08.544 "base_bdevs_list": [ 00:17:08.544 { 00:17:08.544 "name": "spare", 00:17:08.544 "uuid": "09c4a881-974a-5f03-a3cd-23b29f59a6b2", 00:17:08.544 "is_configured": true, 00:17:08.544 "data_offset": 0, 00:17:08.544 "data_size": 65536 00:17:08.544 }, 00:17:08.544 { 00:17:08.544 "name": "BaseBdev2", 00:17:08.544 "uuid": "eb39070c-c234-529c-aeb3-dd3be76c8124", 00:17:08.544 "is_configured": true, 00:17:08.544 "data_offset": 0, 00:17:08.544 
"data_size": 65536 00:17:08.544 }, 00:17:08.544 { 00:17:08.544 "name": "BaseBdev3", 00:17:08.544 "uuid": "19d9792b-8d0b-5c04-8e67-28fea6838768", 00:17:08.544 "is_configured": true, 00:17:08.544 "data_offset": 0, 00:17:08.544 "data_size": 65536 00:17:08.544 }, 00:17:08.544 { 00:17:08.544 "name": "BaseBdev4", 00:17:08.544 "uuid": "09cbdc2d-e565-511e-b5a7-93d84ba4a4f2", 00:17:08.544 "is_configured": true, 00:17:08.544 "data_offset": 0, 00:17:08.544 "data_size": 65536 00:17:08.544 } 00:17:08.544 ] 00:17:08.544 }' 00:17:08.544 14:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.544 [2024-11-04 14:43:07.532617] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:08.544 [2024-11-04 14:43:07.534359] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:08.544 14:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:08.544 14:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.544 14:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:08.544 14:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:08.544 14:43:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.544 14:43:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:08.544 [2024-11-04 14:43:07.596853] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:08.544 [2024-11-04 14:43:07.644479] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:08.803 [2024-11-04 14:43:07.747461] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: 
Finished rebuild on raid bdev raid_bdev1: No such device 00:17:08.803 [2024-11-04 14:43:07.761515] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.803 [2024-11-04 14:43:07.761584] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:08.803 [2024-11-04 14:43:07.761605] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:08.803 [2024-11-04 14:43:07.811673] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:17:08.803 14:43:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.803 14:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:08.803 14:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.803 14:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.803 14:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.803 14:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.803 14:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:08.803 14:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.803 14:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.803 14:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.803 14:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.803 14:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.803 14:43:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:08.803 14:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.803 14:43:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:08.803 14:43:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.803 14:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.803 "name": "raid_bdev1", 00:17:08.803 "uuid": "41dab2cc-09c9-4919-bf36-c04a0d6bec36", 00:17:08.803 "strip_size_kb": 0, 00:17:08.803 "state": "online", 00:17:08.803 "raid_level": "raid1", 00:17:08.803 "superblock": false, 00:17:08.803 "num_base_bdevs": 4, 00:17:08.803 "num_base_bdevs_discovered": 3, 00:17:08.803 "num_base_bdevs_operational": 3, 00:17:08.803 "base_bdevs_list": [ 00:17:08.803 { 00:17:08.803 "name": null, 00:17:08.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.803 "is_configured": false, 00:17:08.803 "data_offset": 0, 00:17:08.803 "data_size": 65536 00:17:08.803 }, 00:17:08.803 { 00:17:08.803 "name": "BaseBdev2", 00:17:08.803 "uuid": "eb39070c-c234-529c-aeb3-dd3be76c8124", 00:17:08.803 "is_configured": true, 00:17:08.803 "data_offset": 0, 00:17:08.803 "data_size": 65536 00:17:08.803 }, 00:17:08.803 { 00:17:08.803 "name": "BaseBdev3", 00:17:08.803 "uuid": "19d9792b-8d0b-5c04-8e67-28fea6838768", 00:17:08.803 "is_configured": true, 00:17:08.803 "data_offset": 0, 00:17:08.803 "data_size": 65536 00:17:08.803 }, 00:17:08.803 { 00:17:08.803 "name": "BaseBdev4", 00:17:08.803 "uuid": "09cbdc2d-e565-511e-b5a7-93d84ba4a4f2", 00:17:08.803 "is_configured": true, 00:17:08.803 "data_offset": 0, 00:17:08.803 "data_size": 65536 00:17:08.803 } 00:17:08.803 ] 00:17:08.803 }' 00:17:08.803 14:43:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.803 14:43:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:09.319 125.00 IOPS, 375.00 MiB/s 
[2024-11-04T14:43:08.442Z] 14:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:09.319 14:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.319 14:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:09.319 14:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:09.319 14:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.319 14:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.319 14:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.319 14:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.319 14:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:09.319 14:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.577 14:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.577 "name": "raid_bdev1", 00:17:09.577 "uuid": "41dab2cc-09c9-4919-bf36-c04a0d6bec36", 00:17:09.577 "strip_size_kb": 0, 00:17:09.577 "state": "online", 00:17:09.577 "raid_level": "raid1", 00:17:09.577 "superblock": false, 00:17:09.577 "num_base_bdevs": 4, 00:17:09.577 "num_base_bdevs_discovered": 3, 00:17:09.577 "num_base_bdevs_operational": 3, 00:17:09.577 "base_bdevs_list": [ 00:17:09.577 { 00:17:09.577 "name": null, 00:17:09.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.577 "is_configured": false, 00:17:09.577 "data_offset": 0, 00:17:09.577 "data_size": 65536 00:17:09.577 }, 00:17:09.577 { 00:17:09.577 "name": "BaseBdev2", 00:17:09.577 "uuid": "eb39070c-c234-529c-aeb3-dd3be76c8124", 00:17:09.577 "is_configured": true, 00:17:09.577 
"data_offset": 0, 00:17:09.577 "data_size": 65536 00:17:09.577 }, 00:17:09.577 { 00:17:09.577 "name": "BaseBdev3", 00:17:09.577 "uuid": "19d9792b-8d0b-5c04-8e67-28fea6838768", 00:17:09.577 "is_configured": true, 00:17:09.577 "data_offset": 0, 00:17:09.577 "data_size": 65536 00:17:09.577 }, 00:17:09.577 { 00:17:09.577 "name": "BaseBdev4", 00:17:09.577 "uuid": "09cbdc2d-e565-511e-b5a7-93d84ba4a4f2", 00:17:09.577 "is_configured": true, 00:17:09.577 "data_offset": 0, 00:17:09.577 "data_size": 65536 00:17:09.577 } 00:17:09.577 ] 00:17:09.577 }' 00:17:09.577 14:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.577 14:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:09.577 14:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.577 14:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:09.577 14:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:09.577 14:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.577 14:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:09.577 [2024-11-04 14:43:08.573529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:09.577 14:43:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.577 14:43:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:09.577 [2024-11-04 14:43:08.641858] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:09.577 [2024-11-04 14:43:08.644488] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:09.836 [2024-11-04 14:43:08.773396] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:09.836 [2024-11-04 14:43:08.775048] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:10.353 137.67 IOPS, 413.00 MiB/s [2024-11-04T14:43:09.476Z] [2024-11-04 14:43:09.327820] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:10.611 [2024-11-04 14:43:09.582772] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:10.611 14:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.611 14:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.611 14:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.611 14:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.611 14:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.611 14:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.611 14:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.611 14:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.611 14:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:10.611 14:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.611 14:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.611 "name": "raid_bdev1", 00:17:10.611 "uuid": "41dab2cc-09c9-4919-bf36-c04a0d6bec36", 00:17:10.611 "strip_size_kb": 0, 00:17:10.611 "state": "online", 00:17:10.611 
"raid_level": "raid1", 00:17:10.611 "superblock": false, 00:17:10.611 "num_base_bdevs": 4, 00:17:10.611 "num_base_bdevs_discovered": 4, 00:17:10.611 "num_base_bdevs_operational": 4, 00:17:10.611 "process": { 00:17:10.611 "type": "rebuild", 00:17:10.611 "target": "spare", 00:17:10.611 "progress": { 00:17:10.611 "blocks": 10240, 00:17:10.611 "percent": 15 00:17:10.611 } 00:17:10.611 }, 00:17:10.611 "base_bdevs_list": [ 00:17:10.611 { 00:17:10.611 "name": "spare", 00:17:10.611 "uuid": "09c4a881-974a-5f03-a3cd-23b29f59a6b2", 00:17:10.611 "is_configured": true, 00:17:10.611 "data_offset": 0, 00:17:10.611 "data_size": 65536 00:17:10.611 }, 00:17:10.611 { 00:17:10.611 "name": "BaseBdev2", 00:17:10.611 "uuid": "eb39070c-c234-529c-aeb3-dd3be76c8124", 00:17:10.611 "is_configured": true, 00:17:10.611 "data_offset": 0, 00:17:10.611 "data_size": 65536 00:17:10.611 }, 00:17:10.611 { 00:17:10.611 "name": "BaseBdev3", 00:17:10.611 "uuid": "19d9792b-8d0b-5c04-8e67-28fea6838768", 00:17:10.611 "is_configured": true, 00:17:10.611 "data_offset": 0, 00:17:10.611 "data_size": 65536 00:17:10.611 }, 00:17:10.611 { 00:17:10.611 "name": "BaseBdev4", 00:17:10.611 "uuid": "09cbdc2d-e565-511e-b5a7-93d84ba4a4f2", 00:17:10.611 "is_configured": true, 00:17:10.611 "data_offset": 0, 00:17:10.611 "data_size": 65536 00:17:10.611 } 00:17:10.611 ] 00:17:10.611 }' 00:17:10.611 14:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.611 14:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:10.611 14:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.869 14:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.869 14:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:10.869 14:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local 
num_base_bdevs_operational=4 00:17:10.869 14:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:10.869 14:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:17:10.869 14:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:10.869 14:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.869 14:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:10.869 [2024-11-04 14:43:09.784870] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:10.869 [2024-11-04 14:43:09.927160] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:17:10.870 [2024-11-04 14:43:09.927476] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:17:10.870 14:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.870 14:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:17:10.870 14:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:17:10.870 14:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.870 14:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.870 14:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.870 14:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.870 14:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.870 14:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.870 14:43:09 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.870 14:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:10.870 14:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.870 14:43:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.127 117.50 IOPS, 352.50 MiB/s [2024-11-04T14:43:10.250Z] 14:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.127 "name": "raid_bdev1", 00:17:11.127 "uuid": "41dab2cc-09c9-4919-bf36-c04a0d6bec36", 00:17:11.127 "strip_size_kb": 0, 00:17:11.127 "state": "online", 00:17:11.127 "raid_level": "raid1", 00:17:11.127 "superblock": false, 00:17:11.127 "num_base_bdevs": 4, 00:17:11.127 "num_base_bdevs_discovered": 3, 00:17:11.127 "num_base_bdevs_operational": 3, 00:17:11.127 "process": { 00:17:11.128 "type": "rebuild", 00:17:11.128 "target": "spare", 00:17:11.128 "progress": { 00:17:11.128 "blocks": 12288, 00:17:11.128 "percent": 18 00:17:11.128 } 00:17:11.128 }, 00:17:11.128 "base_bdevs_list": [ 00:17:11.128 { 00:17:11.128 "name": "spare", 00:17:11.128 "uuid": "09c4a881-974a-5f03-a3cd-23b29f59a6b2", 00:17:11.128 "is_configured": true, 00:17:11.128 "data_offset": 0, 00:17:11.128 "data_size": 65536 00:17:11.128 }, 00:17:11.128 { 00:17:11.128 "name": null, 00:17:11.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.128 "is_configured": false, 00:17:11.128 "data_offset": 0, 00:17:11.128 "data_size": 65536 00:17:11.128 }, 00:17:11.128 { 00:17:11.128 "name": "BaseBdev3", 00:17:11.128 "uuid": "19d9792b-8d0b-5c04-8e67-28fea6838768", 00:17:11.128 "is_configured": true, 00:17:11.128 "data_offset": 0, 00:17:11.128 "data_size": 65536 00:17:11.128 }, 00:17:11.128 { 00:17:11.128 "name": "BaseBdev4", 00:17:11.128 "uuid": "09cbdc2d-e565-511e-b5a7-93d84ba4a4f2", 00:17:11.128 "is_configured": true, 00:17:11.128 "data_offset": 0, 00:17:11.128 "data_size": 
65536 00:17:11.128 } 00:17:11.128 ] 00:17:11.128 }' 00:17:11.128 14:43:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.128 14:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:11.128 14:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.128 [2024-11-04 14:43:10.051917] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:11.128 14:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:11.128 14:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=523 00:17:11.128 14:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:11.128 14:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:11.128 14:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.128 14:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:11.128 14:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:11.128 14:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.128 14:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.128 14:43:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.128 14:43:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:11.128 14:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.128 14:43:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:11.128 14:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.128 "name": "raid_bdev1", 00:17:11.128 "uuid": "41dab2cc-09c9-4919-bf36-c04a0d6bec36", 00:17:11.128 "strip_size_kb": 0, 00:17:11.128 "state": "online", 00:17:11.128 "raid_level": "raid1", 00:17:11.128 "superblock": false, 00:17:11.128 "num_base_bdevs": 4, 00:17:11.128 "num_base_bdevs_discovered": 3, 00:17:11.128 "num_base_bdevs_operational": 3, 00:17:11.128 "process": { 00:17:11.128 "type": "rebuild", 00:17:11.128 "target": "spare", 00:17:11.128 "progress": { 00:17:11.128 "blocks": 14336, 00:17:11.128 "percent": 21 00:17:11.128 } 00:17:11.128 }, 00:17:11.128 "base_bdevs_list": [ 00:17:11.128 { 00:17:11.128 "name": "spare", 00:17:11.128 "uuid": "09c4a881-974a-5f03-a3cd-23b29f59a6b2", 00:17:11.128 "is_configured": true, 00:17:11.128 "data_offset": 0, 00:17:11.128 "data_size": 65536 00:17:11.128 }, 00:17:11.128 { 00:17:11.128 "name": null, 00:17:11.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.128 "is_configured": false, 00:17:11.128 "data_offset": 0, 00:17:11.128 "data_size": 65536 00:17:11.128 }, 00:17:11.128 { 00:17:11.128 "name": "BaseBdev3", 00:17:11.128 "uuid": "19d9792b-8d0b-5c04-8e67-28fea6838768", 00:17:11.128 "is_configured": true, 00:17:11.128 "data_offset": 0, 00:17:11.128 "data_size": 65536 00:17:11.128 }, 00:17:11.128 { 00:17:11.128 "name": "BaseBdev4", 00:17:11.128 "uuid": "09cbdc2d-e565-511e-b5a7-93d84ba4a4f2", 00:17:11.128 "is_configured": true, 00:17:11.128 "data_offset": 0, 00:17:11.128 "data_size": 65536 00:17:11.128 } 00:17:11.128 ] 00:17:11.128 }' 00:17:11.128 [2024-11-04 14:43:10.153761] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:11.128 [2024-11-04 14:43:10.154167] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:11.128 14:43:10 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.128 14:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:11.128 14:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.386 14:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:11.386 14:43:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:11.386 [2024-11-04 14:43:10.396269] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:17:11.644 [2024-11-04 14:43:10.543935] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:17:11.644 [2024-11-04 14:43:10.544356] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:17:11.903 104.60 IOPS, 313.80 MiB/s [2024-11-04T14:43:11.026Z] [2024-11-04 14:43:11.008047] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:17:12.162 14:43:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:12.162 14:43:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.162 14:43:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.162 14:43:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:12.162 14:43:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.162 14:43:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.162 14:43:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.162 
14:43:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.162 14:43:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:12.162 14:43:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.420 14:43:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.420 14:43:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.420 "name": "raid_bdev1", 00:17:12.420 "uuid": "41dab2cc-09c9-4919-bf36-c04a0d6bec36", 00:17:12.420 "strip_size_kb": 0, 00:17:12.420 "state": "online", 00:17:12.420 "raid_level": "raid1", 00:17:12.420 "superblock": false, 00:17:12.420 "num_base_bdevs": 4, 00:17:12.420 "num_base_bdevs_discovered": 3, 00:17:12.420 "num_base_bdevs_operational": 3, 00:17:12.420 "process": { 00:17:12.420 "type": "rebuild", 00:17:12.420 "target": "spare", 00:17:12.420 "progress": { 00:17:12.420 "blocks": 30720, 00:17:12.420 "percent": 46 00:17:12.420 } 00:17:12.420 }, 00:17:12.420 "base_bdevs_list": [ 00:17:12.420 { 00:17:12.420 "name": "spare", 00:17:12.420 "uuid": "09c4a881-974a-5f03-a3cd-23b29f59a6b2", 00:17:12.420 "is_configured": true, 00:17:12.420 "data_offset": 0, 00:17:12.420 "data_size": 65536 00:17:12.420 }, 00:17:12.420 { 00:17:12.420 "name": null, 00:17:12.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.420 "is_configured": false, 00:17:12.420 "data_offset": 0, 00:17:12.420 "data_size": 65536 00:17:12.420 }, 00:17:12.420 { 00:17:12.420 "name": "BaseBdev3", 00:17:12.420 "uuid": "19d9792b-8d0b-5c04-8e67-28fea6838768", 00:17:12.420 "is_configured": true, 00:17:12.420 "data_offset": 0, 00:17:12.420 "data_size": 65536 00:17:12.420 }, 00:17:12.420 { 00:17:12.420 "name": "BaseBdev4", 00:17:12.420 "uuid": "09cbdc2d-e565-511e-b5a7-93d84ba4a4f2", 00:17:12.420 "is_configured": true, 00:17:12.420 "data_offset": 0, 00:17:12.420 "data_size": 65536 
00:17:12.420 } 00:17:12.420 ] 00:17:12.420 }' 00:17:12.420 14:43:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.420 [2024-11-04 14:43:11.344284] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:17:12.420 14:43:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:12.421 14:43:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.421 14:43:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:12.421 14:43:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:12.678 [2024-11-04 14:43:11.563650] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:17:12.678 [2024-11-04 14:43:11.564258] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:17:13.504 95.00 IOPS, 285.00 MiB/s [2024-11-04T14:43:12.627Z] 14:43:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:13.504 14:43:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:13.504 14:43:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.504 14:43:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:13.504 14:43:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:13.504 14:43:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.504 14:43:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.504 14:43:12 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.504 14:43:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.504 14:43:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.504 14:43:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.504 14:43:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.504 "name": "raid_bdev1", 00:17:13.504 "uuid": "41dab2cc-09c9-4919-bf36-c04a0d6bec36", 00:17:13.504 "strip_size_kb": 0, 00:17:13.504 "state": "online", 00:17:13.504 "raid_level": "raid1", 00:17:13.504 "superblock": false, 00:17:13.504 "num_base_bdevs": 4, 00:17:13.504 "num_base_bdevs_discovered": 3, 00:17:13.504 "num_base_bdevs_operational": 3, 00:17:13.504 "process": { 00:17:13.504 "type": "rebuild", 00:17:13.504 "target": "spare", 00:17:13.504 "progress": { 00:17:13.504 "blocks": 47104, 00:17:13.504 "percent": 71 00:17:13.504 } 00:17:13.504 }, 00:17:13.504 "base_bdevs_list": [ 00:17:13.504 { 00:17:13.504 "name": "spare", 00:17:13.504 "uuid": "09c4a881-974a-5f03-a3cd-23b29f59a6b2", 00:17:13.504 "is_configured": true, 00:17:13.504 "data_offset": 0, 00:17:13.504 "data_size": 65536 00:17:13.504 }, 00:17:13.504 { 00:17:13.504 "name": null, 00:17:13.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.504 "is_configured": false, 00:17:13.504 "data_offset": 0, 00:17:13.504 "data_size": 65536 00:17:13.504 }, 00:17:13.504 { 00:17:13.504 "name": "BaseBdev3", 00:17:13.504 "uuid": "19d9792b-8d0b-5c04-8e67-28fea6838768", 00:17:13.504 "is_configured": true, 00:17:13.504 "data_offset": 0, 00:17:13.504 "data_size": 65536 00:17:13.504 }, 00:17:13.504 { 00:17:13.504 "name": "BaseBdev4", 00:17:13.504 "uuid": "09cbdc2d-e565-511e-b5a7-93d84ba4a4f2", 00:17:13.504 "is_configured": true, 00:17:13.504 "data_offset": 0, 00:17:13.504 "data_size": 65536 00:17:13.504 } 00:17:13.504 ] 00:17:13.504 }' 
00:17:13.504 14:43:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.504 [2024-11-04 14:43:12.596868] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:17:13.504 14:43:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:13.504 14:43:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.762 14:43:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:13.762 14:43:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:14.278 89.00 IOPS, 267.00 MiB/s [2024-11-04T14:43:13.401Z] [2024-11-04 14:43:13.355467] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:14.537 [2024-11-04 14:43:13.455448] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:14.537 [2024-11-04 14:43:13.458811] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.796 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:14.796 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:14.796 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.796 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:14.796 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:14.796 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.796 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.796 14:43:13 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.796 14:43:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:14.796 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.796 14:43:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.796 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.796 "name": "raid_bdev1", 00:17:14.796 "uuid": "41dab2cc-09c9-4919-bf36-c04a0d6bec36", 00:17:14.796 "strip_size_kb": 0, 00:17:14.796 "state": "online", 00:17:14.796 "raid_level": "raid1", 00:17:14.796 "superblock": false, 00:17:14.796 "num_base_bdevs": 4, 00:17:14.796 "num_base_bdevs_discovered": 3, 00:17:14.796 "num_base_bdevs_operational": 3, 00:17:14.796 "base_bdevs_list": [ 00:17:14.796 { 00:17:14.796 "name": "spare", 00:17:14.796 "uuid": "09c4a881-974a-5f03-a3cd-23b29f59a6b2", 00:17:14.796 "is_configured": true, 00:17:14.796 "data_offset": 0, 00:17:14.796 "data_size": 65536 00:17:14.796 }, 00:17:14.796 { 00:17:14.796 "name": null, 00:17:14.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.796 "is_configured": false, 00:17:14.796 "data_offset": 0, 00:17:14.796 "data_size": 65536 00:17:14.796 }, 00:17:14.796 { 00:17:14.796 "name": "BaseBdev3", 00:17:14.796 "uuid": "19d9792b-8d0b-5c04-8e67-28fea6838768", 00:17:14.796 "is_configured": true, 00:17:14.796 "data_offset": 0, 00:17:14.796 "data_size": 65536 00:17:14.796 }, 00:17:14.796 { 00:17:14.796 "name": "BaseBdev4", 00:17:14.796 "uuid": "09cbdc2d-e565-511e-b5a7-93d84ba4a4f2", 00:17:14.796 "is_configured": true, 00:17:14.796 "data_offset": 0, 00:17:14.796 "data_size": 65536 00:17:14.796 } 00:17:14.796 ] 00:17:14.796 }' 00:17:14.796 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.796 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == 
\r\e\b\u\i\l\d ]] 00:17:14.796 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.796 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:14.796 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:17:14.796 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:14.796 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.796 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:14.796 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:14.796 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.796 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.796 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.796 14:43:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.796 14:43:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:14.796 14:43:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.796 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.796 "name": "raid_bdev1", 00:17:14.796 "uuid": "41dab2cc-09c9-4919-bf36-c04a0d6bec36", 00:17:14.796 "strip_size_kb": 0, 00:17:14.796 "state": "online", 00:17:14.796 "raid_level": "raid1", 00:17:14.796 "superblock": false, 00:17:14.796 "num_base_bdevs": 4, 00:17:14.796 "num_base_bdevs_discovered": 3, 00:17:14.796 "num_base_bdevs_operational": 3, 00:17:14.796 "base_bdevs_list": [ 00:17:14.796 { 00:17:14.796 "name": "spare", 00:17:14.796 "uuid": 
"09c4a881-974a-5f03-a3cd-23b29f59a6b2", 00:17:14.796 "is_configured": true, 00:17:14.796 "data_offset": 0, 00:17:14.796 "data_size": 65536 00:17:14.796 }, 00:17:14.796 { 00:17:14.796 "name": null, 00:17:14.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.796 "is_configured": false, 00:17:14.796 "data_offset": 0, 00:17:14.796 "data_size": 65536 00:17:14.796 }, 00:17:14.796 { 00:17:14.796 "name": "BaseBdev3", 00:17:14.796 "uuid": "19d9792b-8d0b-5c04-8e67-28fea6838768", 00:17:14.796 "is_configured": true, 00:17:14.796 "data_offset": 0, 00:17:14.796 "data_size": 65536 00:17:14.796 }, 00:17:14.796 { 00:17:14.796 "name": "BaseBdev4", 00:17:14.796 "uuid": "09cbdc2d-e565-511e-b5a7-93d84ba4a4f2", 00:17:14.796 "is_configured": true, 00:17:14.796 "data_offset": 0, 00:17:14.796 "data_size": 65536 00:17:14.796 } 00:17:14.796 ] 00:17:14.796 }' 00:17:14.796 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.055 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:15.055 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.055 82.25 IOPS, 246.75 MiB/s [2024-11-04T14:43:14.178Z] 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:15.055 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:15.055 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.055 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.055 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:15.055 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:15.055 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=3 00:17:15.055 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.055 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.055 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.055 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.055 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.055 14:43:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.055 14:43:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.055 14:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.056 14:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.056 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.056 "name": "raid_bdev1", 00:17:15.056 "uuid": "41dab2cc-09c9-4919-bf36-c04a0d6bec36", 00:17:15.056 "strip_size_kb": 0, 00:17:15.056 "state": "online", 00:17:15.056 "raid_level": "raid1", 00:17:15.056 "superblock": false, 00:17:15.056 "num_base_bdevs": 4, 00:17:15.056 "num_base_bdevs_discovered": 3, 00:17:15.056 "num_base_bdevs_operational": 3, 00:17:15.056 "base_bdevs_list": [ 00:17:15.056 { 00:17:15.056 "name": "spare", 00:17:15.056 "uuid": "09c4a881-974a-5f03-a3cd-23b29f59a6b2", 00:17:15.056 "is_configured": true, 00:17:15.056 "data_offset": 0, 00:17:15.056 "data_size": 65536 00:17:15.056 }, 00:17:15.056 { 00:17:15.056 "name": null, 00:17:15.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.056 "is_configured": false, 00:17:15.056 "data_offset": 0, 00:17:15.056 "data_size": 65536 00:17:15.056 }, 00:17:15.056 { 00:17:15.056 "name": "BaseBdev3", 00:17:15.056 
"uuid": "19d9792b-8d0b-5c04-8e67-28fea6838768", 00:17:15.056 "is_configured": true, 00:17:15.056 "data_offset": 0, 00:17:15.056 "data_size": 65536 00:17:15.056 }, 00:17:15.056 { 00:17:15.056 "name": "BaseBdev4", 00:17:15.056 "uuid": "09cbdc2d-e565-511e-b5a7-93d84ba4a4f2", 00:17:15.056 "is_configured": true, 00:17:15.056 "data_offset": 0, 00:17:15.056 "data_size": 65536 00:17:15.056 } 00:17:15.056 ] 00:17:15.056 }' 00:17:15.056 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.056 14:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.646 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:15.646 14:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.646 14:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.646 [2024-11-04 14:43:14.503195] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:15.646 [2024-11-04 14:43:14.503230] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:15.646 00:17:15.646 Latency(us) 00:17:15.646 [2024-11-04T14:43:14.769Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.646 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:17:15.646 raid_bdev1 : 8.61 79.07 237.21 0.00 0.00 17446.56 294.17 120109.61 00:17:15.646 [2024-11-04T14:43:14.769Z] =================================================================================================================== 00:17:15.646 [2024-11-04T14:43:14.769Z] Total : 79.07 237.21 0.00 0.00 17446.56 294.17 120109.61 00:17:15.646 [2024-11-04 14:43:14.599650] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.646 [2024-11-04 14:43:14.599740] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:17:15.646 [2024-11-04 14:43:14.599882] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:15.646 [2024-11-04 14:43:14.599900] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:15.646 { 00:17:15.646 "results": [ 00:17:15.646 { 00:17:15.646 "job": "raid_bdev1", 00:17:15.646 "core_mask": "0x1", 00:17:15.646 "workload": "randrw", 00:17:15.646 "percentage": 50, 00:17:15.646 "status": "finished", 00:17:15.646 "queue_depth": 2, 00:17:15.646 "io_size": 3145728, 00:17:15.646 "runtime": 8.612569, 00:17:15.646 "iops": 79.07048407972115, 00:17:15.646 "mibps": 237.21145223916346, 00:17:15.646 "io_failed": 0, 00:17:15.646 "io_timeout": 0, 00:17:15.646 "avg_latency_us": 17446.55686023228, 00:17:15.646 "min_latency_us": 294.16727272727275, 00:17:15.646 "max_latency_us": 120109.61454545455 00:17:15.646 } 00:17:15.646 ], 00:17:15.646 "core_count": 1 00:17:15.646 } 00:17:15.646 14:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.646 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:17:15.646 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.646 14:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.646 14:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.646 14:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.646 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:15.646 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:15.646 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:17:15.646 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 
-- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:17:15.646 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:15.646 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:17:15.646 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:15.646 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:15.646 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:15.646 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:17:15.646 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:15.646 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:15.646 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:17:15.905 /dev/nbd0 00:17:15.905 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:15.905 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:15.905 14:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:15.905 14:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:17:15.905 14:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:15.905 14:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:15.905 14:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:15.905 14:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:17:15.905 14:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i 
= 1 )) 00:17:15.905 14:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:15.905 14:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:15.905 1+0 records in 00:17:15.905 1+0 records out 00:17:15.905 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270049 s, 15.2 MB/s 00:17:15.905 14:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:15.905 14:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:17:15.905 14:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:15.905 14:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:15.905 14:43:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:17:15.905 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:15.905 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:15.905 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:15.905 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:17:15.905 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:17:15.905 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:15.905 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:17:15.905 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:17:15.905 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:17:15.905 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:17:15.905 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:15.905 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:15.905 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:15.905 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:17:15.905 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:15.905 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:15.905 14:43:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:17:16.472 /dev/nbd1 00:17:16.472 14:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:16.472 14:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:16.472 14:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:17:16.472 14:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:17:16.472 14:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:16.472 14:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:16.472 14:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:17:16.472 14:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:17:16.472 14:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:16.472 14:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:16.472 14:43:15 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:16.472 1+0 records in 00:17:16.472 1+0 records out 00:17:16.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291521 s, 14.1 MB/s 00:17:16.472 14:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:16.472 14:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:17:16.472 14:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:16.472 14:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:16.472 14:43:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:17:16.472 14:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:16.472 14:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:16.472 14:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:16.472 14:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:16.472 14:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:16.472 14:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:16.473 14:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:16.473 14:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:17:16.473 14:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:16.473 14:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd1 00:17:16.731 14:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:16.732 14:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:16.732 14:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:16.732 14:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:16.732 14:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:16.732 14:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:16.732 14:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:17:16.732 14:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:16.732 14:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:16.732 14:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:17:16.732 14:43:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:17:16.732 14:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:16.732 14:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:17:16.732 14:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:16.732 14:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:16.732 14:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:16.732 14:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:17:16.732 14:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:16.732 14:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:17:16.732 14:43:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:17:17.298 /dev/nbd1 00:17:17.298 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:17.298 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:17.298 14:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:17:17.298 14:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:17:17.298 14:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:17.298 14:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:17.298 14:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:17:17.298 14:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:17:17.298 14:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:17.298 14:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:17.298 14:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:17.298 1+0 records in 00:17:17.298 1+0 records out 00:17:17.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472136 s, 8.7 MB/s 00:17:17.298 14:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.298 14:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:17:17.298 14:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:17.298 14:43:16 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:17.298 14:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:17:17.298 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:17.298 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:17.298 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:17.298 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:17.298 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:17.298 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:17.298 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:17.298 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:17:17.298 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:17.298 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:17.557 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:17.557 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:17.557 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:17.557 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:17.557 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:17.557 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:17.557 14:43:16 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@41 -- # break 00:17:17.557 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:17.557 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:17.557 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:17.557 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:17.557 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:17.557 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:17:17.557 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:17.557 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:17.815 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:17.815 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:17.815 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:17.815 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:17.815 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:17.815 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:17.815 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:17:17.815 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:17.815 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:17.815 14:43:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79066 00:17:17.816 14:43:16 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 79066 ']' 00:17:17.816 14:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 79066 00:17:17.816 14:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:17:17.816 14:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:17.816 14:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79066 00:17:17.816 killing process with pid 79066 00:17:17.816 Received shutdown signal, test time was about 10.962860 seconds 00:17:17.816 00:17:17.816 Latency(us) 00:17:17.816 [2024-11-04T14:43:16.939Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.816 [2024-11-04T14:43:16.939Z] =================================================================================================================== 00:17:17.816 [2024-11-04T14:43:16.939Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:17.816 14:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:17.816 14:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:17.816 14:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79066' 00:17:17.816 14:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 79066 00:17:17.816 [2024-11-04 14:43:16.929416] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:17.816 14:43:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 79066 00:17:18.382 [2024-11-04 14:43:17.318499] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:19.316 ************************************ 00:17:19.316 END TEST raid_rebuild_test_io 00:17:19.316 ************************************ 00:17:19.316 14:43:18 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:17:19.316 00:17:19.316 real 0m14.614s 00:17:19.316 user 0m19.470s 00:17:19.316 sys 0m1.768s 00:17:19.316 14:43:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:19.316 14:43:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.575 14:43:18 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:17:19.575 14:43:18 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:17:19.575 14:43:18 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:19.575 14:43:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:19.575 ************************************ 00:17:19.575 START TEST raid_rebuild_test_sb_io 00:17:19.575 ************************************ 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true true true 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:19.575 
14:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:19.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79486 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79486 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 79486 ']' 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:19.575 14:43:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.575 [2024-11-04 14:43:18.587208] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:17:19.575 [2024-11-04 14:43:18.587381] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79486 ] 00:17:19.575 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:19.575 Zero copy mechanism will not be used. 00:17:19.834 [2024-11-04 14:43:18.776783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.834 [2024-11-04 14:43:18.912231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.092 [2024-11-04 14:43:19.128755] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:20.092 [2024-11-04 14:43:19.128831] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:20.660 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:20.660 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:17:20.660 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:20.660 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:20.660 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.660 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.660 BaseBdev1_malloc 00:17:20.660 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.660 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:20.660 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.660 14:43:19 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.660 [2024-11-04 14:43:19.605009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:20.660 [2024-11-04 14:43:19.605108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.660 [2024-11-04 14:43:19.605141] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:20.660 [2024-11-04 14:43:19.605161] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.660 [2024-11-04 14:43:19.608094] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.660 [2024-11-04 14:43:19.608162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:20.660 BaseBdev1 00:17:20.660 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.660 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:20.660 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:20.660 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.660 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.660 BaseBdev2_malloc 00:17:20.660 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.660 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:20.660 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.660 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.660 [2024-11-04 14:43:19.662713] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev2_malloc 00:17:20.660 [2024-11-04 14:43:19.662789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.660 [2024-11-04 14:43:19.662818] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:20.660 [2024-11-04 14:43:19.662838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.660 [2024-11-04 14:43:19.665718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.660 [2024-11-04 14:43:19.665762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:20.660 BaseBdev2 00:17:20.660 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.660 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:20.660 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:20.660 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.660 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.660 BaseBdev3_malloc 00:17:20.660 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.660 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:20.660 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.660 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.660 [2024-11-04 14:43:19.736909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:20.660 [2024-11-04 14:43:19.737000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.660 
[2024-11-04 14:43:19.737036] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:20.660 [2024-11-04 14:43:19.737055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.660 [2024-11-04 14:43:19.740134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.660 [2024-11-04 14:43:19.740183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:20.660 BaseBdev3 00:17:20.660 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.660 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:20.660 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:20.661 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.661 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.919 BaseBdev4_malloc 00:17:20.919 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.919 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:20.919 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.919 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.919 [2024-11-04 14:43:19.795294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:20.919 [2024-11-04 14:43:19.795371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.919 [2024-11-04 14:43:19.795403] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:20.919 [2024-11-04 14:43:19.795422] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.919 [2024-11-04 14:43:19.798427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.919 [2024-11-04 14:43:19.798490] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:20.919 BaseBdev4 00:17:20.919 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.919 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:20.919 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.919 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.919 spare_malloc 00:17:20.919 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.919 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:20.919 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.919 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.919 spare_delay 00:17:20.919 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.919 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:20.919 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.919 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.919 [2024-11-04 14:43:19.865489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:20.919 [2024-11-04 14:43:19.865566] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:17:20.920 [2024-11-04 14:43:19.865598] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:20.920 [2024-11-04 14:43:19.865617] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.920 [2024-11-04 14:43:19.868489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.920 [2024-11-04 14:43:19.868565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:20.920 spare 00:17:20.920 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.920 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:20.920 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.920 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.920 [2024-11-04 14:43:19.877613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:20.920 [2024-11-04 14:43:19.880219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:20.920 [2024-11-04 14:43:19.880325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:20.920 [2024-11-04 14:43:19.880410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:20.920 [2024-11-04 14:43:19.880673] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:20.920 [2024-11-04 14:43:19.880710] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:20.920 [2024-11-04 14:43:19.881074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:20.920 [2024-11-04 14:43:19.881329] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:20.920 [2024-11-04 14:43:19.881356] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:20.920 [2024-11-04 14:43:19.881634] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.920 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.920 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:20.920 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.920 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.920 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:20.920 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:20.920 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:20.920 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.920 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.920 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.920 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.920 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.920 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.920 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.920 14:43:19 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.920 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.920 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.920 "name": "raid_bdev1", 00:17:20.920 "uuid": "9f0d6b87-3129-47ec-9c4a-54a810aaa4de", 00:17:20.920 "strip_size_kb": 0, 00:17:20.920 "state": "online", 00:17:20.920 "raid_level": "raid1", 00:17:20.920 "superblock": true, 00:17:20.920 "num_base_bdevs": 4, 00:17:20.920 "num_base_bdevs_discovered": 4, 00:17:20.920 "num_base_bdevs_operational": 4, 00:17:20.920 "base_bdevs_list": [ 00:17:20.920 { 00:17:20.920 "name": "BaseBdev1", 00:17:20.920 "uuid": "ce4d8efb-0239-5b25-a080-d36b8991d725", 00:17:20.920 "is_configured": true, 00:17:20.920 "data_offset": 2048, 00:17:20.920 "data_size": 63488 00:17:20.920 }, 00:17:20.920 { 00:17:20.920 "name": "BaseBdev2", 00:17:20.920 "uuid": "c67c45e8-daa5-520c-859b-66be003e511b", 00:17:20.920 "is_configured": true, 00:17:20.920 "data_offset": 2048, 00:17:20.920 "data_size": 63488 00:17:20.920 }, 00:17:20.920 { 00:17:20.920 "name": "BaseBdev3", 00:17:20.920 "uuid": "1311c2d6-c68f-5e85-88ba-4e73bb0fa6d0", 00:17:20.920 "is_configured": true, 00:17:20.920 "data_offset": 2048, 00:17:20.920 "data_size": 63488 00:17:20.920 }, 00:17:20.920 { 00:17:20.920 "name": "BaseBdev4", 00:17:20.920 "uuid": "a5dffaa3-4660-5e22-8649-328aef6a27cd", 00:17:20.920 "is_configured": true, 00:17:20.920 "data_offset": 2048, 00:17:20.920 "data_size": 63488 00:17:20.920 } 00:17:20.920 ] 00:17:20.920 }' 00:17:20.920 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.920 14:43:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:21.486 14:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:21.486 14:43:20 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.486 14:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:21.486 14:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:21.486 [2024-11-04 14:43:20.434295] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:21.486 14:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.486 14:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:17:21.486 14:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.486 14:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.486 14:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:21.486 14:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:21.486 14:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.486 14:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:21.486 14:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:17:21.486 14:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:21.486 14:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:21.486 14:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.486 14:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:21.486 [2024-11-04 14:43:20.533836] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:21.486 
14:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.486 14:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:21.486 14:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.486 14:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.486 14:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:21.486 14:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:21.486 14:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:21.486 14:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.486 14:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.486 14:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.486 14:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.486 14:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.486 14:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.486 14:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:21.486 14:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.486 14:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.486 14:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.486 "name": "raid_bdev1", 00:17:21.486 "uuid": "9f0d6b87-3129-47ec-9c4a-54a810aaa4de", 
00:17:21.486 "strip_size_kb": 0, 00:17:21.486 "state": "online", 00:17:21.486 "raid_level": "raid1", 00:17:21.486 "superblock": true, 00:17:21.486 "num_base_bdevs": 4, 00:17:21.486 "num_base_bdevs_discovered": 3, 00:17:21.486 "num_base_bdevs_operational": 3, 00:17:21.486 "base_bdevs_list": [ 00:17:21.486 { 00:17:21.486 "name": null, 00:17:21.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.486 "is_configured": false, 00:17:21.486 "data_offset": 0, 00:17:21.486 "data_size": 63488 00:17:21.486 }, 00:17:21.486 { 00:17:21.486 "name": "BaseBdev2", 00:17:21.486 "uuid": "c67c45e8-daa5-520c-859b-66be003e511b", 00:17:21.486 "is_configured": true, 00:17:21.486 "data_offset": 2048, 00:17:21.486 "data_size": 63488 00:17:21.486 }, 00:17:21.486 { 00:17:21.487 "name": "BaseBdev3", 00:17:21.487 "uuid": "1311c2d6-c68f-5e85-88ba-4e73bb0fa6d0", 00:17:21.487 "is_configured": true, 00:17:21.487 "data_offset": 2048, 00:17:21.487 "data_size": 63488 00:17:21.487 }, 00:17:21.487 { 00:17:21.487 "name": "BaseBdev4", 00:17:21.487 "uuid": "a5dffaa3-4660-5e22-8649-328aef6a27cd", 00:17:21.487 "is_configured": true, 00:17:21.487 "data_offset": 2048, 00:17:21.487 "data_size": 63488 00:17:21.487 } 00:17:21.487 ] 00:17:21.487 }' 00:17:21.487 14:43:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.487 14:43:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:21.745 [2024-11-04 14:43:20.662202] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:21.745 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:21.745 Zero copy mechanism will not be used. 00:17:21.745 Running I/O for 60 seconds... 
00:17:22.002 14:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:22.002 14:43:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.002 14:43:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:22.003 [2024-11-04 14:43:21.100046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:22.275 14:43:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.275 14:43:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:22.275 [2024-11-04 14:43:21.171263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:17:22.275 [2024-11-04 14:43:21.174262] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:22.275 [2024-11-04 14:43:21.298826] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:22.533 [2024-11-04 14:43:21.561844] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:22.533 [2024-11-04 14:43:21.562303] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:23.049 167.00 IOPS, 501.00 MiB/s [2024-11-04T14:43:22.172Z] [2024-11-04 14:43:21.932267] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:23.049 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:23.049 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:23.049 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:23.049 
14:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:23.049 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:23.049 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.049 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.049 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:23.049 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.308 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.308 [2024-11-04 14:43:22.180725] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:23.308 [2024-11-04 14:43:22.181157] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:23.308 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:23.308 "name": "raid_bdev1", 00:17:23.308 "uuid": "9f0d6b87-3129-47ec-9c4a-54a810aaa4de", 00:17:23.308 "strip_size_kb": 0, 00:17:23.308 "state": "online", 00:17:23.308 "raid_level": "raid1", 00:17:23.308 "superblock": true, 00:17:23.308 "num_base_bdevs": 4, 00:17:23.308 "num_base_bdevs_discovered": 4, 00:17:23.308 "num_base_bdevs_operational": 4, 00:17:23.308 "process": { 00:17:23.308 "type": "rebuild", 00:17:23.308 "target": "spare", 00:17:23.308 "progress": { 00:17:23.308 "blocks": 8192, 00:17:23.308 "percent": 12 00:17:23.308 } 00:17:23.308 }, 00:17:23.308 "base_bdevs_list": [ 00:17:23.308 { 00:17:23.308 "name": "spare", 00:17:23.308 "uuid": "0cfc6787-52e6-5aea-b450-2ef9c8853332", 00:17:23.308 "is_configured": true, 00:17:23.308 "data_offset": 2048, 00:17:23.308 "data_size": 63488 
00:17:23.308 }, 00:17:23.308 { 00:17:23.308 "name": "BaseBdev2", 00:17:23.308 "uuid": "c67c45e8-daa5-520c-859b-66be003e511b", 00:17:23.308 "is_configured": true, 00:17:23.308 "data_offset": 2048, 00:17:23.308 "data_size": 63488 00:17:23.308 }, 00:17:23.308 { 00:17:23.308 "name": "BaseBdev3", 00:17:23.308 "uuid": "1311c2d6-c68f-5e85-88ba-4e73bb0fa6d0", 00:17:23.308 "is_configured": true, 00:17:23.308 "data_offset": 2048, 00:17:23.308 "data_size": 63488 00:17:23.308 }, 00:17:23.308 { 00:17:23.308 "name": "BaseBdev4", 00:17:23.308 "uuid": "a5dffaa3-4660-5e22-8649-328aef6a27cd", 00:17:23.308 "is_configured": true, 00:17:23.308 "data_offset": 2048, 00:17:23.308 "data_size": 63488 00:17:23.308 } 00:17:23.308 ] 00:17:23.308 }' 00:17:23.308 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:23.308 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:23.308 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:23.308 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:23.308 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:23.308 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.308 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:23.308 [2024-11-04 14:43:22.335211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:23.567 [2024-11-04 14:43:22.436091] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:23.567 [2024-11-04 14:43:22.539207] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:23.567 [2024-11-04 
14:43:22.560443] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:23.567 [2024-11-04 14:43:22.560543] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:23.567 [2024-11-04 14:43:22.560564] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:23.567 [2024-11-04 14:43:22.592750] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:17:23.567 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.567 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:23.567 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.567 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.567 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:23.567 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:23.567 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:23.567 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.567 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.567 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.567 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.567 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.567 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.567 14:43:22 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.567 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:23.567 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.567 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.567 "name": "raid_bdev1", 00:17:23.567 "uuid": "9f0d6b87-3129-47ec-9c4a-54a810aaa4de", 00:17:23.567 "strip_size_kb": 0, 00:17:23.567 "state": "online", 00:17:23.567 "raid_level": "raid1", 00:17:23.567 "superblock": true, 00:17:23.567 "num_base_bdevs": 4, 00:17:23.567 "num_base_bdevs_discovered": 3, 00:17:23.567 "num_base_bdevs_operational": 3, 00:17:23.567 "base_bdevs_list": [ 00:17:23.567 { 00:17:23.567 "name": null, 00:17:23.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.567 "is_configured": false, 00:17:23.567 "data_offset": 0, 00:17:23.567 "data_size": 63488 00:17:23.567 }, 00:17:23.567 { 00:17:23.567 "name": "BaseBdev2", 00:17:23.567 "uuid": "c67c45e8-daa5-520c-859b-66be003e511b", 00:17:23.567 "is_configured": true, 00:17:23.567 "data_offset": 2048, 00:17:23.567 "data_size": 63488 00:17:23.567 }, 00:17:23.567 { 00:17:23.567 "name": "BaseBdev3", 00:17:23.567 "uuid": "1311c2d6-c68f-5e85-88ba-4e73bb0fa6d0", 00:17:23.567 "is_configured": true, 00:17:23.567 "data_offset": 2048, 00:17:23.567 "data_size": 63488 00:17:23.567 }, 00:17:23.567 { 00:17:23.567 "name": "BaseBdev4", 00:17:23.567 "uuid": "a5dffaa3-4660-5e22-8649-328aef6a27cd", 00:17:23.567 "is_configured": true, 00:17:23.567 "data_offset": 2048, 00:17:23.567 "data_size": 63488 00:17:23.567 } 00:17:23.567 ] 00:17:23.567 }' 00:17:23.567 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.567 14:43:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:24.083 112.00 IOPS, 336.00 MiB/s [2024-11-04T14:43:23.206Z] 14:43:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:24.083 14:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.083 14:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:24.083 14:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:24.083 14:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.083 14:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.083 14:43:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.083 14:43:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:24.083 14:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.083 14:43:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.083 14:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.083 "name": "raid_bdev1", 00:17:24.083 "uuid": "9f0d6b87-3129-47ec-9c4a-54a810aaa4de", 00:17:24.083 "strip_size_kb": 0, 00:17:24.083 "state": "online", 00:17:24.083 "raid_level": "raid1", 00:17:24.083 "superblock": true, 00:17:24.083 "num_base_bdevs": 4, 00:17:24.083 "num_base_bdevs_discovered": 3, 00:17:24.083 "num_base_bdevs_operational": 3, 00:17:24.083 "base_bdevs_list": [ 00:17:24.083 { 00:17:24.083 "name": null, 00:17:24.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.083 "is_configured": false, 00:17:24.083 "data_offset": 0, 00:17:24.083 "data_size": 63488 00:17:24.083 }, 00:17:24.083 { 00:17:24.083 "name": "BaseBdev2", 00:17:24.083 "uuid": "c67c45e8-daa5-520c-859b-66be003e511b", 00:17:24.083 "is_configured": true, 00:17:24.083 "data_offset": 
2048, 00:17:24.083 "data_size": 63488 00:17:24.083 }, 00:17:24.083 { 00:17:24.083 "name": "BaseBdev3", 00:17:24.083 "uuid": "1311c2d6-c68f-5e85-88ba-4e73bb0fa6d0", 00:17:24.083 "is_configured": true, 00:17:24.083 "data_offset": 2048, 00:17:24.083 "data_size": 63488 00:17:24.083 }, 00:17:24.083 { 00:17:24.083 "name": "BaseBdev4", 00:17:24.083 "uuid": "a5dffaa3-4660-5e22-8649-328aef6a27cd", 00:17:24.083 "is_configured": true, 00:17:24.084 "data_offset": 2048, 00:17:24.084 "data_size": 63488 00:17:24.084 } 00:17:24.084 ] 00:17:24.084 }' 00:17:24.084 14:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.342 14:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:24.342 14:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.342 14:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:24.342 14:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:24.342 14:43:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.342 14:43:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:24.342 [2024-11-04 14:43:23.319595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:24.342 14:43:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.342 14:43:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:24.342 [2024-11-04 14:43:23.387067] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:24.342 [2024-11-04 14:43:23.389724] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:24.601 [2024-11-04 14:43:23.517501] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:24.601 [2024-11-04 14:43:23.519338] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:24.859 126.00 IOPS, 378.00 MiB/s [2024-11-04T14:43:23.982Z] [2024-11-04 14:43:23.764004] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:24.859 [2024-11-04 14:43:23.764882] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:25.117 [2024-11-04 14:43:24.094529] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:25.375 [2024-11-04 14:43:24.306128] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:25.375 [2024-11-04 14:43:24.307147] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:25.375 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:25.375 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.375 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:25.375 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:25.375 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.375 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.375 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.375 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:17:25.375 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.375 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.375 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.375 "name": "raid_bdev1", 00:17:25.375 "uuid": "9f0d6b87-3129-47ec-9c4a-54a810aaa4de", 00:17:25.375 "strip_size_kb": 0, 00:17:25.375 "state": "online", 00:17:25.375 "raid_level": "raid1", 00:17:25.375 "superblock": true, 00:17:25.375 "num_base_bdevs": 4, 00:17:25.375 "num_base_bdevs_discovered": 4, 00:17:25.375 "num_base_bdevs_operational": 4, 00:17:25.375 "process": { 00:17:25.375 "type": "rebuild", 00:17:25.375 "target": "spare", 00:17:25.375 "progress": { 00:17:25.375 "blocks": 10240, 00:17:25.376 "percent": 16 00:17:25.376 } 00:17:25.376 }, 00:17:25.376 "base_bdevs_list": [ 00:17:25.376 { 00:17:25.376 "name": "spare", 00:17:25.376 "uuid": "0cfc6787-52e6-5aea-b450-2ef9c8853332", 00:17:25.376 "is_configured": true, 00:17:25.376 "data_offset": 2048, 00:17:25.376 "data_size": 63488 00:17:25.376 }, 00:17:25.376 { 00:17:25.376 "name": "BaseBdev2", 00:17:25.376 "uuid": "c67c45e8-daa5-520c-859b-66be003e511b", 00:17:25.376 "is_configured": true, 00:17:25.376 "data_offset": 2048, 00:17:25.376 "data_size": 63488 00:17:25.376 }, 00:17:25.376 { 00:17:25.376 "name": "BaseBdev3", 00:17:25.376 "uuid": "1311c2d6-c68f-5e85-88ba-4e73bb0fa6d0", 00:17:25.376 "is_configured": true, 00:17:25.376 "data_offset": 2048, 00:17:25.376 "data_size": 63488 00:17:25.376 }, 00:17:25.376 { 00:17:25.376 "name": "BaseBdev4", 00:17:25.376 "uuid": "a5dffaa3-4660-5e22-8649-328aef6a27cd", 00:17:25.376 "is_configured": true, 00:17:25.376 "data_offset": 2048, 00:17:25.376 "data_size": 63488 00:17:25.376 } 00:17:25.376 ] 00:17:25.376 }' 00:17:25.376 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:17:25.376 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:25.376 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.634 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:25.634 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:25.634 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:25.634 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:25.634 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:25.634 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:25.634 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:17:25.634 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:25.634 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.634 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:25.634 [2024-11-04 14:43:24.514839] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:25.634 103.00 IOPS, 309.00 MiB/s [2024-11-04T14:43:24.757Z] [2024-11-04 14:43:24.735566] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:17:25.634 [2024-11-04 14:43:24.735642] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:17:25.634 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.634 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # 
base_bdevs[1]= 00:17:25.634 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:17:25.634 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:25.634 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.634 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:25.634 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:25.634 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.634 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.634 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.634 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.634 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:25.893 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.893 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.893 "name": "raid_bdev1", 00:17:25.893 "uuid": "9f0d6b87-3129-47ec-9c4a-54a810aaa4de", 00:17:25.893 "strip_size_kb": 0, 00:17:25.893 "state": "online", 00:17:25.893 "raid_level": "raid1", 00:17:25.893 "superblock": true, 00:17:25.893 "num_base_bdevs": 4, 00:17:25.893 "num_base_bdevs_discovered": 3, 00:17:25.893 "num_base_bdevs_operational": 3, 00:17:25.893 "process": { 00:17:25.893 "type": "rebuild", 00:17:25.893 "target": "spare", 00:17:25.893 "progress": { 00:17:25.893 "blocks": 12288, 00:17:25.893 "percent": 19 00:17:25.893 } 00:17:25.893 }, 00:17:25.893 "base_bdevs_list": [ 00:17:25.893 { 
00:17:25.893 "name": "spare", 00:17:25.893 "uuid": "0cfc6787-52e6-5aea-b450-2ef9c8853332", 00:17:25.893 "is_configured": true, 00:17:25.893 "data_offset": 2048, 00:17:25.893 "data_size": 63488 00:17:25.893 }, 00:17:25.893 { 00:17:25.893 "name": null, 00:17:25.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.893 "is_configured": false, 00:17:25.893 "data_offset": 0, 00:17:25.893 "data_size": 63488 00:17:25.893 }, 00:17:25.893 { 00:17:25.893 "name": "BaseBdev3", 00:17:25.893 "uuid": "1311c2d6-c68f-5e85-88ba-4e73bb0fa6d0", 00:17:25.893 "is_configured": true, 00:17:25.893 "data_offset": 2048, 00:17:25.893 "data_size": 63488 00:17:25.893 }, 00:17:25.893 { 00:17:25.893 "name": "BaseBdev4", 00:17:25.893 "uuid": "a5dffaa3-4660-5e22-8649-328aef6a27cd", 00:17:25.893 "is_configured": true, 00:17:25.893 "data_offset": 2048, 00:17:25.893 "data_size": 63488 00:17:25.893 } 00:17:25.893 ] 00:17:25.893 }' 00:17:25.893 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.893 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:25.893 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.893 [2024-11-04 14:43:24.868521] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:17:25.893 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:25.893 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=537 00:17:25.893 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:25.893 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:25.893 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:17:25.893 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:25.893 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:25.893 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.893 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.893 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.893 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.893 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:25.893 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.893 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.893 "name": "raid_bdev1", 00:17:25.893 "uuid": "9f0d6b87-3129-47ec-9c4a-54a810aaa4de", 00:17:25.893 "strip_size_kb": 0, 00:17:25.893 "state": "online", 00:17:25.893 "raid_level": "raid1", 00:17:25.893 "superblock": true, 00:17:25.893 "num_base_bdevs": 4, 00:17:25.893 "num_base_bdevs_discovered": 3, 00:17:25.893 "num_base_bdevs_operational": 3, 00:17:25.893 "process": { 00:17:25.893 "type": "rebuild", 00:17:25.893 "target": "spare", 00:17:25.893 "progress": { 00:17:25.893 "blocks": 14336, 00:17:25.893 "percent": 22 00:17:25.893 } 00:17:25.893 }, 00:17:25.893 "base_bdevs_list": [ 00:17:25.893 { 00:17:25.893 "name": "spare", 00:17:25.893 "uuid": "0cfc6787-52e6-5aea-b450-2ef9c8853332", 00:17:25.893 "is_configured": true, 00:17:25.893 "data_offset": 2048, 00:17:25.893 "data_size": 63488 00:17:25.893 }, 00:17:25.893 { 00:17:25.893 "name": null, 00:17:25.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.893 "is_configured": false, 00:17:25.893 
"data_offset": 0, 00:17:25.893 "data_size": 63488 00:17:25.893 }, 00:17:25.893 { 00:17:25.893 "name": "BaseBdev3", 00:17:25.893 "uuid": "1311c2d6-c68f-5e85-88ba-4e73bb0fa6d0", 00:17:25.893 "is_configured": true, 00:17:25.893 "data_offset": 2048, 00:17:25.893 "data_size": 63488 00:17:25.893 }, 00:17:25.893 { 00:17:25.893 "name": "BaseBdev4", 00:17:25.893 "uuid": "a5dffaa3-4660-5e22-8649-328aef6a27cd", 00:17:25.893 "is_configured": true, 00:17:25.893 "data_offset": 2048, 00:17:25.893 "data_size": 63488 00:17:25.893 } 00:17:25.893 ] 00:17:25.893 }' 00:17:25.893 14:43:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.151 [2024-11-04 14:43:25.018420] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:17:26.151 14:43:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:26.151 14:43:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.151 14:43:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:26.151 14:43:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:26.732 [2024-11-04 14:43:25.646205] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:17:26.990 98.40 IOPS, 295.20 MiB/s [2024-11-04T14:43:26.113Z] 14:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:26.990 14:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:26.990 14:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.990 14:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:26.990 14:43:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:26.990 14:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.990 14:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.990 14:43:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.990 14:43:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:26.990 14:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.990 14:43:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.249 [2024-11-04 14:43:26.140047] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:17:27.249 14:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.249 "name": "raid_bdev1", 00:17:27.249 "uuid": "9f0d6b87-3129-47ec-9c4a-54a810aaa4de", 00:17:27.249 "strip_size_kb": 0, 00:17:27.249 "state": "online", 00:17:27.249 "raid_level": "raid1", 00:17:27.249 "superblock": true, 00:17:27.249 "num_base_bdevs": 4, 00:17:27.249 "num_base_bdevs_discovered": 3, 00:17:27.249 "num_base_bdevs_operational": 3, 00:17:27.249 "process": { 00:17:27.249 "type": "rebuild", 00:17:27.249 "target": "spare", 00:17:27.249 "progress": { 00:17:27.249 "blocks": 30720, 00:17:27.249 "percent": 48 00:17:27.249 } 00:17:27.249 }, 00:17:27.249 "base_bdevs_list": [ 00:17:27.249 { 00:17:27.249 "name": "spare", 00:17:27.249 "uuid": "0cfc6787-52e6-5aea-b450-2ef9c8853332", 00:17:27.249 "is_configured": true, 00:17:27.249 "data_offset": 2048, 00:17:27.249 "data_size": 63488 00:17:27.249 }, 00:17:27.249 { 00:17:27.249 "name": null, 00:17:27.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.249 "is_configured": false, 00:17:27.249 
"data_offset": 0, 00:17:27.249 "data_size": 63488 00:17:27.249 }, 00:17:27.249 { 00:17:27.249 "name": "BaseBdev3", 00:17:27.249 "uuid": "1311c2d6-c68f-5e85-88ba-4e73bb0fa6d0", 00:17:27.249 "is_configured": true, 00:17:27.249 "data_offset": 2048, 00:17:27.249 "data_size": 63488 00:17:27.249 }, 00:17:27.249 { 00:17:27.249 "name": "BaseBdev4", 00:17:27.249 "uuid": "a5dffaa3-4660-5e22-8649-328aef6a27cd", 00:17:27.249 "is_configured": true, 00:17:27.249 "data_offset": 2048, 00:17:27.249 "data_size": 63488 00:17:27.249 } 00:17:27.249 ] 00:17:27.249 }' 00:17:27.249 14:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.249 14:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:27.249 14:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.249 14:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:27.249 14:43:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:27.249 [2024-11-04 14:43:26.261680] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:17:27.817 89.83 IOPS, 269.50 MiB/s [2024-11-04T14:43:26.940Z] [2024-11-04 14:43:26.839496] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:17:28.385 14:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:28.385 14:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.385 14:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.385 14:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.385 14:43:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.385 14:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.385 14:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.385 14:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.385 14:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.385 14:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:28.385 14:43:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.385 [2024-11-04 14:43:27.284561] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:17:28.385 14:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.385 "name": "raid_bdev1", 00:17:28.385 "uuid": "9f0d6b87-3129-47ec-9c4a-54a810aaa4de", 00:17:28.385 "strip_size_kb": 0, 00:17:28.385 "state": "online", 00:17:28.385 "raid_level": "raid1", 00:17:28.385 "superblock": true, 00:17:28.385 "num_base_bdevs": 4, 00:17:28.385 "num_base_bdevs_discovered": 3, 00:17:28.385 "num_base_bdevs_operational": 3, 00:17:28.385 "process": { 00:17:28.385 "type": "rebuild", 00:17:28.385 "target": "spare", 00:17:28.385 "progress": { 00:17:28.385 "blocks": 49152, 00:17:28.385 "percent": 77 00:17:28.385 } 00:17:28.385 }, 00:17:28.385 "base_bdevs_list": [ 00:17:28.385 { 00:17:28.385 "name": "spare", 00:17:28.385 "uuid": "0cfc6787-52e6-5aea-b450-2ef9c8853332", 00:17:28.385 "is_configured": true, 00:17:28.385 "data_offset": 2048, 00:17:28.385 "data_size": 63488 00:17:28.385 }, 00:17:28.385 { 00:17:28.385 "name": null, 00:17:28.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.385 "is_configured": false, 00:17:28.385 
"data_offset": 0, 00:17:28.385 "data_size": 63488 00:17:28.385 }, 00:17:28.385 { 00:17:28.385 "name": "BaseBdev3", 00:17:28.385 "uuid": "1311c2d6-c68f-5e85-88ba-4e73bb0fa6d0", 00:17:28.385 "is_configured": true, 00:17:28.385 "data_offset": 2048, 00:17:28.385 "data_size": 63488 00:17:28.385 }, 00:17:28.385 { 00:17:28.385 "name": "BaseBdev4", 00:17:28.385 "uuid": "a5dffaa3-4660-5e22-8649-328aef6a27cd", 00:17:28.385 "is_configured": true, 00:17:28.385 "data_offset": 2048, 00:17:28.385 "data_size": 63488 00:17:28.385 } 00:17:28.385 ] 00:17:28.385 }' 00:17:28.385 14:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.385 14:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:28.385 14:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.385 14:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.385 14:43:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:28.385 [2024-11-04 14:43:27.503985] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:17:28.385 [2024-11-04 14:43:27.504601] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:17:29.227 81.86 IOPS, 245.57 MiB/s [2024-11-04T14:43:28.350Z] [2024-11-04 14:43:28.190966] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:29.227 [2024-11-04 14:43:28.290938] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:29.227 [2024-11-04 14:43:28.303052] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:29.487 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:29.487 
14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:29.487 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.487 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:29.487 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:29.487 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.487 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.487 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.487 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:29.487 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.487 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.487 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.487 "name": "raid_bdev1", 00:17:29.487 "uuid": "9f0d6b87-3129-47ec-9c4a-54a810aaa4de", 00:17:29.487 "strip_size_kb": 0, 00:17:29.487 "state": "online", 00:17:29.487 "raid_level": "raid1", 00:17:29.487 "superblock": true, 00:17:29.487 "num_base_bdevs": 4, 00:17:29.487 "num_base_bdevs_discovered": 3, 00:17:29.487 "num_base_bdevs_operational": 3, 00:17:29.487 "base_bdevs_list": [ 00:17:29.487 { 00:17:29.487 "name": "spare", 00:17:29.487 "uuid": "0cfc6787-52e6-5aea-b450-2ef9c8853332", 00:17:29.487 "is_configured": true, 00:17:29.487 "data_offset": 2048, 00:17:29.487 "data_size": 63488 00:17:29.487 }, 00:17:29.487 { 00:17:29.487 "name": null, 00:17:29.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.487 "is_configured": false, 00:17:29.487 
"data_offset": 0, 00:17:29.487 "data_size": 63488 00:17:29.487 }, 00:17:29.487 { 00:17:29.487 "name": "BaseBdev3", 00:17:29.487 "uuid": "1311c2d6-c68f-5e85-88ba-4e73bb0fa6d0", 00:17:29.487 "is_configured": true, 00:17:29.487 "data_offset": 2048, 00:17:29.487 "data_size": 63488 00:17:29.487 }, 00:17:29.487 { 00:17:29.487 "name": "BaseBdev4", 00:17:29.487 "uuid": "a5dffaa3-4660-5e22-8649-328aef6a27cd", 00:17:29.487 "is_configured": true, 00:17:29.487 "data_offset": 2048, 00:17:29.487 "data_size": 63488 00:17:29.487 } 00:17:29.487 ] 00:17:29.487 }' 00:17:29.487 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.487 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:29.487 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.487 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:29.487 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:17:29.487 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:29.487 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.487 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:29.487 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:29.487 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.487 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.487 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.487 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # 
set +x 00:17:29.487 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.747 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.747 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.747 "name": "raid_bdev1", 00:17:29.747 "uuid": "9f0d6b87-3129-47ec-9c4a-54a810aaa4de", 00:17:29.747 "strip_size_kb": 0, 00:17:29.747 "state": "online", 00:17:29.747 "raid_level": "raid1", 00:17:29.747 "superblock": true, 00:17:29.747 "num_base_bdevs": 4, 00:17:29.747 "num_base_bdevs_discovered": 3, 00:17:29.747 "num_base_bdevs_operational": 3, 00:17:29.747 "base_bdevs_list": [ 00:17:29.747 { 00:17:29.747 "name": "spare", 00:17:29.747 "uuid": "0cfc6787-52e6-5aea-b450-2ef9c8853332", 00:17:29.747 "is_configured": true, 00:17:29.747 "data_offset": 2048, 00:17:29.747 "data_size": 63488 00:17:29.747 }, 00:17:29.747 { 00:17:29.747 "name": null, 00:17:29.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.747 "is_configured": false, 00:17:29.747 "data_offset": 0, 00:17:29.747 "data_size": 63488 00:17:29.747 }, 00:17:29.747 { 00:17:29.747 "name": "BaseBdev3", 00:17:29.747 "uuid": "1311c2d6-c68f-5e85-88ba-4e73bb0fa6d0", 00:17:29.747 "is_configured": true, 00:17:29.747 "data_offset": 2048, 00:17:29.747 "data_size": 63488 00:17:29.748 }, 00:17:29.748 { 00:17:29.748 "name": "BaseBdev4", 00:17:29.748 "uuid": "a5dffaa3-4660-5e22-8649-328aef6a27cd", 00:17:29.748 "is_configured": true, 00:17:29.748 "data_offset": 2048, 00:17:29.748 "data_size": 63488 00:17:29.748 } 00:17:29.748 ] 00:17:29.748 }' 00:17:29.748 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.748 75.50 IOPS, 226.50 MiB/s [2024-11-04T14:43:28.871Z] 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:29.748 14:43:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.748 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:29.748 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:29.748 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:29.748 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:29.748 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:29.748 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:29.748 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:29.748 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.748 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.748 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.748 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.748 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.748 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.748 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:29.748 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.748 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.748 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:17:29.748 "name": "raid_bdev1", 00:17:29.748 "uuid": "9f0d6b87-3129-47ec-9c4a-54a810aaa4de", 00:17:29.748 "strip_size_kb": 0, 00:17:29.748 "state": "online", 00:17:29.748 "raid_level": "raid1", 00:17:29.748 "superblock": true, 00:17:29.748 "num_base_bdevs": 4, 00:17:29.748 "num_base_bdevs_discovered": 3, 00:17:29.748 "num_base_bdevs_operational": 3, 00:17:29.748 "base_bdevs_list": [ 00:17:29.748 { 00:17:29.748 "name": "spare", 00:17:29.748 "uuid": "0cfc6787-52e6-5aea-b450-2ef9c8853332", 00:17:29.748 "is_configured": true, 00:17:29.748 "data_offset": 2048, 00:17:29.748 "data_size": 63488 00:17:29.748 }, 00:17:29.748 { 00:17:29.748 "name": null, 00:17:29.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.748 "is_configured": false, 00:17:29.748 "data_offset": 0, 00:17:29.748 "data_size": 63488 00:17:29.748 }, 00:17:29.748 { 00:17:29.748 "name": "BaseBdev3", 00:17:29.748 "uuid": "1311c2d6-c68f-5e85-88ba-4e73bb0fa6d0", 00:17:29.748 "is_configured": true, 00:17:29.748 "data_offset": 2048, 00:17:29.748 "data_size": 63488 00:17:29.748 }, 00:17:29.748 { 00:17:29.748 "name": "BaseBdev4", 00:17:29.748 "uuid": "a5dffaa3-4660-5e22-8649-328aef6a27cd", 00:17:29.748 "is_configured": true, 00:17:29.748 "data_offset": 2048, 00:17:29.748 "data_size": 63488 00:17:29.748 } 00:17:29.748 ] 00:17:29.748 }' 00:17:29.748 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.748 14:43:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:30.316 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:30.316 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.316 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:30.316 [2024-11-04 14:43:29.292659] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:30.316 
[2024-11-04 14:43:29.292696] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:30.316 00:17:30.316 Latency(us) 00:17:30.316 [2024-11-04T14:43:29.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.316 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:17:30.316 raid_bdev1 : 8.64 71.96 215.88 0.00 0.00 18411.80 279.27 124875.87 00:17:30.316 [2024-11-04T14:43:29.439Z] =================================================================================================================== 00:17:30.316 [2024-11-04T14:43:29.439Z] Total : 71.96 215.88 0.00 0.00 18411.80 279.27 124875.87 00:17:30.316 [2024-11-04 14:43:29.328766] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.316 [2024-11-04 14:43:29.328840] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:30.316 [2024-11-04 14:43:29.329021] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:30.316 [2024-11-04 14:43:29.329043] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:30.316 { 00:17:30.316 "results": [ 00:17:30.316 { 00:17:30.316 "job": "raid_bdev1", 00:17:30.316 "core_mask": "0x1", 00:17:30.316 "workload": "randrw", 00:17:30.316 "percentage": 50, 00:17:30.316 "status": "finished", 00:17:30.316 "queue_depth": 2, 00:17:30.316 "io_size": 3145728, 00:17:30.316 "runtime": 8.64352, 00:17:30.316 "iops": 71.96142312391248, 00:17:30.316 "mibps": 215.88426937173745, 00:17:30.316 "io_failed": 0, 00:17:30.316 "io_timeout": 0, 00:17:30.316 "avg_latency_us": 18411.804361297865, 00:17:30.316 "min_latency_us": 279.27272727272725, 00:17:30.316 "max_latency_us": 124875.8690909091 00:17:30.316 } 00:17:30.316 ], 00:17:30.316 "core_count": 1 00:17:30.316 } 00:17:30.316 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.316 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.316 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:17:30.316 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.316 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:30.316 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.316 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:30.316 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:30.316 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:17:30.316 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:17:30.316 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:30.316 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:17:30.316 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:30.316 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:30.316 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:30.316 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:30.316 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:30.316 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:30.316 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:17:30.883 /dev/nbd0 00:17:30.883 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:30.883 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:30.883 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:30.883 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:17:30.883 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:30.883 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:30.883 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:30.883 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:17:30.883 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:30.883 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:30.883 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:30.883 1+0 records in 00:17:30.883 1+0 records out 00:17:30.883 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309929 s, 13.2 MB/s 00:17:30.883 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:30.883 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:17:30.883 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:30.883 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 
00:17:30.883 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:17:30.883 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:30.883 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:30.883 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:30.883 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:17:30.883 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:17:30.883 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:30.883 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:17:30.883 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:17:30.883 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:30.883 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:17:30.883 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:30.883 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:30.883 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:30.883 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:30.883 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:30.883 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:30.883 14:43:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 
00:17:31.142 /dev/nbd1 00:17:31.142 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:31.142 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:31.142 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:17:31.142 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:17:31.142 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:31.142 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:31.142 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:17:31.142 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:17:31.142 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:31.142 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:31.142 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:31.142 1+0 records in 00:17:31.142 1+0 records out 00:17:31.142 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375956 s, 10.9 MB/s 00:17:31.142 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:31.142 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:17:31.142 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:31.142 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:31.142 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@891 -- # return 0 00:17:31.142 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:31.142 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:31.142 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:31.401 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:31.401 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:31.401 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:31.401 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:31.401 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:17:31.401 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:31.401 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:31.660 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:31.660 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:31.660 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:31.660 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:31.661 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:31.661 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:31.661 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:31.661 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@45 -- # return 0 00:17:31.661 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:31.661 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:17:31.661 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:17:31.661 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:31.661 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:17:31.661 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:31.661 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:31.661 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:31.661 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:31.661 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:31.661 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:31.661 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:17:31.920 /dev/nbd1 00:17:31.920 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:31.920 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:31.920 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:17:31.920 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:17:31.920 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:31.920 14:43:30 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:31.920 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:17:31.920 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:17:31.920 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:31.920 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:31.920 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:31.920 1+0 records in 00:17:31.920 1+0 records out 00:17:31.920 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388168 s, 10.6 MB/s 00:17:31.920 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:31.920 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:17:31.920 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:31.920 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:31.920 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:17:31.920 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:31.920 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:31.920 14:43:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:31.920 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:31.920 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # 
local rpc_server=/var/tmp/spdk.sock 00:17:31.920 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:31.920 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:31.920 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:17:31.920 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:31.920 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:32.179 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:32.438 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:32.438 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:32.438 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:32.438 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:32.438 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:32.438 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:32.438 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:32.438 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:32.438 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:32.438 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:32.438 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:32.438 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # 
local i 00:17:32.438 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:32.438 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:32.697 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:32.697 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:32.697 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:32.697 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:32.697 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:32.697 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:32.697 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:32.697 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:32.697 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:32.697 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:32.697 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.697 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:32.697 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.697 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:32.697 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.697 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- 
# set +x 00:17:32.697 [2024-11-04 14:43:31.678298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:32.697 [2024-11-04 14:43:31.678367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:32.697 [2024-11-04 14:43:31.678398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:32.697 [2024-11-04 14:43:31.678413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:32.697 [2024-11-04 14:43:31.681470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:32.697 [2024-11-04 14:43:31.681522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:32.697 [2024-11-04 14:43:31.681649] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:32.697 [2024-11-04 14:43:31.681737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:32.697 [2024-11-04 14:43:31.681913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:32.697 [2024-11-04 14:43:31.682088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:32.697 spare 00:17:32.697 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.697 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:32.697 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.697 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:32.697 [2024-11-04 14:43:31.782232] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:32.697 [2024-11-04 14:43:31.782293] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:32.697 [2024-11-04 14:43:31.782757] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:17:32.697 [2024-11-04 14:43:31.783032] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:32.697 [2024-11-04 14:43:31.783067] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:32.697 [2024-11-04 14:43:31.783322] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.697 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.697 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:32.697 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:32.697 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.697 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:32.697 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:32.697 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:32.697 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.698 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.698 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.698 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.698 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.698 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.698 14:43:31 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.698 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:32.698 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.956 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.956 "name": "raid_bdev1", 00:17:32.956 "uuid": "9f0d6b87-3129-47ec-9c4a-54a810aaa4de", 00:17:32.956 "strip_size_kb": 0, 00:17:32.956 "state": "online", 00:17:32.956 "raid_level": "raid1", 00:17:32.956 "superblock": true, 00:17:32.956 "num_base_bdevs": 4, 00:17:32.956 "num_base_bdevs_discovered": 3, 00:17:32.956 "num_base_bdevs_operational": 3, 00:17:32.956 "base_bdevs_list": [ 00:17:32.956 { 00:17:32.956 "name": "spare", 00:17:32.956 "uuid": "0cfc6787-52e6-5aea-b450-2ef9c8853332", 00:17:32.956 "is_configured": true, 00:17:32.956 "data_offset": 2048, 00:17:32.956 "data_size": 63488 00:17:32.956 }, 00:17:32.956 { 00:17:32.956 "name": null, 00:17:32.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.956 "is_configured": false, 00:17:32.956 "data_offset": 2048, 00:17:32.956 "data_size": 63488 00:17:32.956 }, 00:17:32.956 { 00:17:32.956 "name": "BaseBdev3", 00:17:32.956 "uuid": "1311c2d6-c68f-5e85-88ba-4e73bb0fa6d0", 00:17:32.956 "is_configured": true, 00:17:32.956 "data_offset": 2048, 00:17:32.956 "data_size": 63488 00:17:32.956 }, 00:17:32.956 { 00:17:32.956 "name": "BaseBdev4", 00:17:32.956 "uuid": "a5dffaa3-4660-5e22-8649-328aef6a27cd", 00:17:32.956 "is_configured": true, 00:17:32.956 "data_offset": 2048, 00:17:32.956 "data_size": 63488 00:17:32.956 } 00:17:32.956 ] 00:17:32.956 }' 00:17:32.956 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.956 14:43:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:33.215 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:33.215 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.215 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:33.215 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:33.215 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.215 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.215 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.215 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.215 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:33.474 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.474 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.474 "name": "raid_bdev1", 00:17:33.474 "uuid": "9f0d6b87-3129-47ec-9c4a-54a810aaa4de", 00:17:33.474 "strip_size_kb": 0, 00:17:33.474 "state": "online", 00:17:33.474 "raid_level": "raid1", 00:17:33.474 "superblock": true, 00:17:33.474 "num_base_bdevs": 4, 00:17:33.474 "num_base_bdevs_discovered": 3, 00:17:33.474 "num_base_bdevs_operational": 3, 00:17:33.474 "base_bdevs_list": [ 00:17:33.474 { 00:17:33.474 "name": "spare", 00:17:33.474 "uuid": "0cfc6787-52e6-5aea-b450-2ef9c8853332", 00:17:33.474 "is_configured": true, 00:17:33.474 "data_offset": 2048, 00:17:33.474 "data_size": 63488 00:17:33.474 }, 00:17:33.474 { 00:17:33.474 "name": null, 00:17:33.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.474 "is_configured": false, 00:17:33.474 "data_offset": 2048, 00:17:33.474 "data_size": 63488 
00:17:33.474 }, 00:17:33.474 { 00:17:33.474 "name": "BaseBdev3", 00:17:33.474 "uuid": "1311c2d6-c68f-5e85-88ba-4e73bb0fa6d0", 00:17:33.474 "is_configured": true, 00:17:33.474 "data_offset": 2048, 00:17:33.474 "data_size": 63488 00:17:33.474 }, 00:17:33.474 { 00:17:33.474 "name": "BaseBdev4", 00:17:33.474 "uuid": "a5dffaa3-4660-5e22-8649-328aef6a27cd", 00:17:33.474 "is_configured": true, 00:17:33.474 "data_offset": 2048, 00:17:33.474 "data_size": 63488 00:17:33.474 } 00:17:33.474 ] 00:17:33.474 }' 00:17:33.474 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:33.474 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:33.474 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:33.474 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:33.474 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.474 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:33.474 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.474 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:33.474 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.474 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:33.474 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:33.474 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.474 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:33.474 [2024-11-04 
14:43:32.531708] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:33.474 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.474 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:33.474 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.474 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.474 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:33.474 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:33.474 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:33.474 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.474 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.474 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.474 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.474 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.474 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.474 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.474 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:33.475 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.733 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.734 
"name": "raid_bdev1", 00:17:33.734 "uuid": "9f0d6b87-3129-47ec-9c4a-54a810aaa4de", 00:17:33.734 "strip_size_kb": 0, 00:17:33.734 "state": "online", 00:17:33.734 "raid_level": "raid1", 00:17:33.734 "superblock": true, 00:17:33.734 "num_base_bdevs": 4, 00:17:33.734 "num_base_bdevs_discovered": 2, 00:17:33.734 "num_base_bdevs_operational": 2, 00:17:33.734 "base_bdevs_list": [ 00:17:33.734 { 00:17:33.734 "name": null, 00:17:33.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.734 "is_configured": false, 00:17:33.734 "data_offset": 0, 00:17:33.734 "data_size": 63488 00:17:33.734 }, 00:17:33.734 { 00:17:33.734 "name": null, 00:17:33.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.734 "is_configured": false, 00:17:33.734 "data_offset": 2048, 00:17:33.734 "data_size": 63488 00:17:33.734 }, 00:17:33.734 { 00:17:33.734 "name": "BaseBdev3", 00:17:33.734 "uuid": "1311c2d6-c68f-5e85-88ba-4e73bb0fa6d0", 00:17:33.734 "is_configured": true, 00:17:33.734 "data_offset": 2048, 00:17:33.734 "data_size": 63488 00:17:33.734 }, 00:17:33.734 { 00:17:33.734 "name": "BaseBdev4", 00:17:33.734 "uuid": "a5dffaa3-4660-5e22-8649-328aef6a27cd", 00:17:33.734 "is_configured": true, 00:17:33.734 "data_offset": 2048, 00:17:33.734 "data_size": 63488 00:17:33.734 } 00:17:33.734 ] 00:17:33.734 }' 00:17:33.734 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.734 14:43:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:33.993 14:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:33.993 14:43:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.993 14:43:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:33.993 [2024-11-04 14:43:33.072050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:33.993 [2024-11-04 
14:43:33.072285] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:17:33.993 [2024-11-04 14:43:33.072307] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:33.993 [2024-11-04 14:43:33.072352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:33.993 [2024-11-04 14:43:33.086951] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:17:33.993 14:43:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.993 14:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:33.993 [2024-11-04 14:43:33.089360] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.402 "name": "raid_bdev1", 00:17:35.402 "uuid": "9f0d6b87-3129-47ec-9c4a-54a810aaa4de", 00:17:35.402 "strip_size_kb": 0, 00:17:35.402 "state": "online", 00:17:35.402 "raid_level": "raid1", 00:17:35.402 "superblock": true, 00:17:35.402 "num_base_bdevs": 4, 00:17:35.402 "num_base_bdevs_discovered": 3, 00:17:35.402 "num_base_bdevs_operational": 3, 00:17:35.402 "process": { 00:17:35.402 "type": "rebuild", 00:17:35.402 "target": "spare", 00:17:35.402 "progress": { 00:17:35.402 "blocks": 20480, 00:17:35.402 "percent": 32 00:17:35.402 } 00:17:35.402 }, 00:17:35.402 "base_bdevs_list": [ 00:17:35.402 { 00:17:35.402 "name": "spare", 00:17:35.402 "uuid": "0cfc6787-52e6-5aea-b450-2ef9c8853332", 00:17:35.402 "is_configured": true, 00:17:35.402 "data_offset": 2048, 00:17:35.402 "data_size": 63488 00:17:35.402 }, 00:17:35.402 { 00:17:35.402 "name": null, 00:17:35.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.402 "is_configured": false, 00:17:35.402 "data_offset": 2048, 00:17:35.402 "data_size": 63488 00:17:35.402 }, 00:17:35.402 { 00:17:35.402 "name": "BaseBdev3", 00:17:35.402 "uuid": "1311c2d6-c68f-5e85-88ba-4e73bb0fa6d0", 00:17:35.402 "is_configured": true, 00:17:35.402 "data_offset": 2048, 00:17:35.402 "data_size": 63488 00:17:35.402 }, 00:17:35.402 { 00:17:35.402 "name": "BaseBdev4", 00:17:35.402 "uuid": "a5dffaa3-4660-5e22-8649-328aef6a27cd", 00:17:35.402 "is_configured": true, 00:17:35.402 "data_offset": 2048, 00:17:35.402 "data_size": 63488 00:17:35.402 } 00:17:35.402 ] 00:17:35.402 }' 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:35.402 [2024-11-04 14:43:34.255295] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:35.402 [2024-11-04 14:43:34.298390] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:35.402 [2024-11-04 14:43:34.298475] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.402 [2024-11-04 14:43:34.298502] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:35.402 [2024-11-04 14:43:34.298513] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.402 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.402 "name": "raid_bdev1", 00:17:35.402 "uuid": "9f0d6b87-3129-47ec-9c4a-54a810aaa4de", 00:17:35.402 "strip_size_kb": 0, 00:17:35.402 "state": "online", 00:17:35.402 "raid_level": "raid1", 00:17:35.402 "superblock": true, 00:17:35.402 "num_base_bdevs": 4, 00:17:35.402 "num_base_bdevs_discovered": 2, 00:17:35.402 "num_base_bdevs_operational": 2, 00:17:35.402 "base_bdevs_list": [ 00:17:35.402 { 00:17:35.402 "name": null, 00:17:35.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.403 "is_configured": false, 00:17:35.403 "data_offset": 0, 00:17:35.403 "data_size": 63488 00:17:35.403 }, 00:17:35.403 { 00:17:35.403 "name": null, 00:17:35.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.403 "is_configured": false, 00:17:35.403 "data_offset": 2048, 00:17:35.403 "data_size": 63488 00:17:35.403 }, 00:17:35.403 { 00:17:35.403 "name": "BaseBdev3", 00:17:35.403 "uuid": "1311c2d6-c68f-5e85-88ba-4e73bb0fa6d0", 00:17:35.403 "is_configured": true, 
00:17:35.403 "data_offset": 2048, 00:17:35.403 "data_size": 63488 00:17:35.403 }, 00:17:35.403 { 00:17:35.403 "name": "BaseBdev4", 00:17:35.403 "uuid": "a5dffaa3-4660-5e22-8649-328aef6a27cd", 00:17:35.403 "is_configured": true, 00:17:35.403 "data_offset": 2048, 00:17:35.403 "data_size": 63488 00:17:35.403 } 00:17:35.403 ] 00:17:35.403 }' 00:17:35.403 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.403 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:35.970 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:35.970 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.970 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:35.970 [2024-11-04 14:43:34.842021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:35.970 [2024-11-04 14:43:34.842099] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.970 [2024-11-04 14:43:34.842136] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:17:35.970 [2024-11-04 14:43:34.842153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.970 [2024-11-04 14:43:34.842802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.970 [2024-11-04 14:43:34.842833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:35.970 [2024-11-04 14:43:34.842968] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:35.970 [2024-11-04 14:43:34.842989] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:17:35.970 [2024-11-04 14:43:34.843008] 
bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:35.970 [2024-11-04 14:43:34.843037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:35.970 [2024-11-04 14:43:34.857415] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:17:35.970 spare 00:17:35.970 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.970 14:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:35.970 [2024-11-04 14:43:34.860007] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:36.906 14:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:36.906 14:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.906 14:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:36.906 14:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:36.906 14:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.906 14:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.906 14:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.906 14:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.906 14:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.906 14:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.906 14:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.906 "name": "raid_bdev1", 00:17:36.906 
"uuid": "9f0d6b87-3129-47ec-9c4a-54a810aaa4de", 00:17:36.906 "strip_size_kb": 0, 00:17:36.906 "state": "online", 00:17:36.906 "raid_level": "raid1", 00:17:36.906 "superblock": true, 00:17:36.906 "num_base_bdevs": 4, 00:17:36.906 "num_base_bdevs_discovered": 3, 00:17:36.906 "num_base_bdevs_operational": 3, 00:17:36.906 "process": { 00:17:36.906 "type": "rebuild", 00:17:36.906 "target": "spare", 00:17:36.906 "progress": { 00:17:36.906 "blocks": 20480, 00:17:36.906 "percent": 32 00:17:36.906 } 00:17:36.906 }, 00:17:36.906 "base_bdevs_list": [ 00:17:36.906 { 00:17:36.906 "name": "spare", 00:17:36.906 "uuid": "0cfc6787-52e6-5aea-b450-2ef9c8853332", 00:17:36.906 "is_configured": true, 00:17:36.906 "data_offset": 2048, 00:17:36.906 "data_size": 63488 00:17:36.906 }, 00:17:36.906 { 00:17:36.906 "name": null, 00:17:36.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.906 "is_configured": false, 00:17:36.906 "data_offset": 2048, 00:17:36.906 "data_size": 63488 00:17:36.906 }, 00:17:36.906 { 00:17:36.906 "name": "BaseBdev3", 00:17:36.906 "uuid": "1311c2d6-c68f-5e85-88ba-4e73bb0fa6d0", 00:17:36.906 "is_configured": true, 00:17:36.906 "data_offset": 2048, 00:17:36.906 "data_size": 63488 00:17:36.906 }, 00:17:36.906 { 00:17:36.906 "name": "BaseBdev4", 00:17:36.906 "uuid": "a5dffaa3-4660-5e22-8649-328aef6a27cd", 00:17:36.906 "is_configured": true, 00:17:36.906 "data_offset": 2048, 00:17:36.906 "data_size": 63488 00:17:36.906 } 00:17:36.906 ] 00:17:36.906 }' 00:17:36.906 14:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.906 14:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:36.906 14:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.906 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:36.906 14:43:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:36.906 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.906 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:36.906 [2024-11-04 14:43:36.025323] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:37.166 [2024-11-04 14:43:36.069184] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:37.166 [2024-11-04 14:43:36.069284] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.166 [2024-11-04 14:43:36.069309] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:37.166 [2024-11-04 14:43:36.069326] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:37.166 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.166 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:37.166 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.166 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.166 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.166 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.166 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:37.166 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.166 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.166 14:43:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.166 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.166 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.166 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.166 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.166 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.166 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.166 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.166 "name": "raid_bdev1", 00:17:37.166 "uuid": "9f0d6b87-3129-47ec-9c4a-54a810aaa4de", 00:17:37.166 "strip_size_kb": 0, 00:17:37.166 "state": "online", 00:17:37.166 "raid_level": "raid1", 00:17:37.166 "superblock": true, 00:17:37.166 "num_base_bdevs": 4, 00:17:37.166 "num_base_bdevs_discovered": 2, 00:17:37.166 "num_base_bdevs_operational": 2, 00:17:37.166 "base_bdevs_list": [ 00:17:37.166 { 00:17:37.166 "name": null, 00:17:37.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.166 "is_configured": false, 00:17:37.166 "data_offset": 0, 00:17:37.166 "data_size": 63488 00:17:37.166 }, 00:17:37.166 { 00:17:37.166 "name": null, 00:17:37.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.166 "is_configured": false, 00:17:37.166 "data_offset": 2048, 00:17:37.166 "data_size": 63488 00:17:37.166 }, 00:17:37.166 { 00:17:37.166 "name": "BaseBdev3", 00:17:37.166 "uuid": "1311c2d6-c68f-5e85-88ba-4e73bb0fa6d0", 00:17:37.166 "is_configured": true, 00:17:37.166 "data_offset": 2048, 00:17:37.166 "data_size": 63488 00:17:37.166 }, 00:17:37.166 { 00:17:37.166 "name": "BaseBdev4", 00:17:37.166 "uuid": 
"a5dffaa3-4660-5e22-8649-328aef6a27cd", 00:17:37.166 "is_configured": true, 00:17:37.166 "data_offset": 2048, 00:17:37.166 "data_size": 63488 00:17:37.166 } 00:17:37.166 ] 00:17:37.166 }' 00:17:37.166 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.166 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.734 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:37.734 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.734 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:37.734 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:37.734 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.734 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.734 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.734 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.734 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.734 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.734 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.734 "name": "raid_bdev1", 00:17:37.734 "uuid": "9f0d6b87-3129-47ec-9c4a-54a810aaa4de", 00:17:37.734 "strip_size_kb": 0, 00:17:37.734 "state": "online", 00:17:37.734 "raid_level": "raid1", 00:17:37.734 "superblock": true, 00:17:37.734 "num_base_bdevs": 4, 00:17:37.734 "num_base_bdevs_discovered": 2, 00:17:37.734 "num_base_bdevs_operational": 2, 00:17:37.734 
"base_bdevs_list": [ 00:17:37.734 { 00:17:37.734 "name": null, 00:17:37.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.734 "is_configured": false, 00:17:37.734 "data_offset": 0, 00:17:37.734 "data_size": 63488 00:17:37.734 }, 00:17:37.734 { 00:17:37.734 "name": null, 00:17:37.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.734 "is_configured": false, 00:17:37.734 "data_offset": 2048, 00:17:37.734 "data_size": 63488 00:17:37.734 }, 00:17:37.734 { 00:17:37.734 "name": "BaseBdev3", 00:17:37.734 "uuid": "1311c2d6-c68f-5e85-88ba-4e73bb0fa6d0", 00:17:37.734 "is_configured": true, 00:17:37.734 "data_offset": 2048, 00:17:37.734 "data_size": 63488 00:17:37.734 }, 00:17:37.734 { 00:17:37.734 "name": "BaseBdev4", 00:17:37.734 "uuid": "a5dffaa3-4660-5e22-8649-328aef6a27cd", 00:17:37.734 "is_configured": true, 00:17:37.734 "data_offset": 2048, 00:17:37.734 "data_size": 63488 00:17:37.734 } 00:17:37.734 ] 00:17:37.734 }' 00:17:37.734 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.734 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:37.734 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:37.734 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:37.734 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:37.734 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.734 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.734 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.734 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 
00:17:37.734 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.734 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:37.734 [2024-11-04 14:43:36.789029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:37.734 [2024-11-04 14:43:36.789101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.734 [2024-11-04 14:43:36.789127] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:17:37.734 [2024-11-04 14:43:36.789145] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.734 [2024-11-04 14:43:36.789721] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.734 [2024-11-04 14:43:36.789764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:37.734 [2024-11-04 14:43:36.789862] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:37.734 [2024-11-04 14:43:36.789892] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:17:37.734 [2024-11-04 14:43:36.789905] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:37.734 [2024-11-04 14:43:36.789923] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:37.734 BaseBdev1 00:17:37.734 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.734 14:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:39.110 14:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:39.110 14:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:17:39.110 14:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.110 14:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:39.110 14:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:39.110 14:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:39.110 14:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.110 14:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.110 14:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.110 14:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.110 14:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.110 14:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.110 14:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.110 14:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.110 14:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.110 14:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.110 "name": "raid_bdev1", 00:17:39.110 "uuid": "9f0d6b87-3129-47ec-9c4a-54a810aaa4de", 00:17:39.110 "strip_size_kb": 0, 00:17:39.110 "state": "online", 00:17:39.110 "raid_level": "raid1", 00:17:39.110 "superblock": true, 00:17:39.110 "num_base_bdevs": 4, 00:17:39.110 "num_base_bdevs_discovered": 2, 00:17:39.110 "num_base_bdevs_operational": 2, 00:17:39.110 "base_bdevs_list": [ 00:17:39.110 { 00:17:39.110 
"name": null, 00:17:39.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.110 "is_configured": false, 00:17:39.110 "data_offset": 0, 00:17:39.110 "data_size": 63488 00:17:39.110 }, 00:17:39.110 { 00:17:39.110 "name": null, 00:17:39.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.110 "is_configured": false, 00:17:39.110 "data_offset": 2048, 00:17:39.110 "data_size": 63488 00:17:39.110 }, 00:17:39.110 { 00:17:39.110 "name": "BaseBdev3", 00:17:39.110 "uuid": "1311c2d6-c68f-5e85-88ba-4e73bb0fa6d0", 00:17:39.110 "is_configured": true, 00:17:39.110 "data_offset": 2048, 00:17:39.110 "data_size": 63488 00:17:39.110 }, 00:17:39.110 { 00:17:39.110 "name": "BaseBdev4", 00:17:39.110 "uuid": "a5dffaa3-4660-5e22-8649-328aef6a27cd", 00:17:39.110 "is_configured": true, 00:17:39.110 "data_offset": 2048, 00:17:39.110 "data_size": 63488 00:17:39.110 } 00:17:39.110 ] 00:17:39.110 }' 00:17:39.110 14:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.110 14:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.370 14:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:39.370 14:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.370 14:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:39.370 14:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:39.370 14:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.370 14:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.370 14:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.370 14:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:17:39.370 14:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.370 14:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.370 14:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.370 "name": "raid_bdev1", 00:17:39.370 "uuid": "9f0d6b87-3129-47ec-9c4a-54a810aaa4de", 00:17:39.370 "strip_size_kb": 0, 00:17:39.370 "state": "online", 00:17:39.370 "raid_level": "raid1", 00:17:39.370 "superblock": true, 00:17:39.370 "num_base_bdevs": 4, 00:17:39.370 "num_base_bdevs_discovered": 2, 00:17:39.370 "num_base_bdevs_operational": 2, 00:17:39.370 "base_bdevs_list": [ 00:17:39.370 { 00:17:39.370 "name": null, 00:17:39.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.370 "is_configured": false, 00:17:39.370 "data_offset": 0, 00:17:39.370 "data_size": 63488 00:17:39.370 }, 00:17:39.370 { 00:17:39.370 "name": null, 00:17:39.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.370 "is_configured": false, 00:17:39.370 "data_offset": 2048, 00:17:39.370 "data_size": 63488 00:17:39.370 }, 00:17:39.370 { 00:17:39.370 "name": "BaseBdev3", 00:17:39.370 "uuid": "1311c2d6-c68f-5e85-88ba-4e73bb0fa6d0", 00:17:39.370 "is_configured": true, 00:17:39.370 "data_offset": 2048, 00:17:39.370 "data_size": 63488 00:17:39.370 }, 00:17:39.370 { 00:17:39.370 "name": "BaseBdev4", 00:17:39.370 "uuid": "a5dffaa3-4660-5e22-8649-328aef6a27cd", 00:17:39.370 "is_configured": true, 00:17:39.370 "data_offset": 2048, 00:17:39.370 "data_size": 63488 00:17:39.370 } 00:17:39.370 ] 00:17:39.370 }' 00:17:39.370 14:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.370 14:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:39.370 14:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // 
"none"' 00:17:39.630 14:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:39.630 14:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:39.630 14:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:17:39.630 14:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:39.630 14:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:39.630 14:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.630 14:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:39.630 14:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:39.630 14:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:39.630 14:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.630 14:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:39.630 [2024-11-04 14:43:38.537947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:39.630 [2024-11-04 14:43:38.538158] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:17:39.630 [2024-11-04 14:43:38.538190] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:39.630 request: 00:17:39.630 { 00:17:39.630 "base_bdev": "BaseBdev1", 00:17:39.630 "raid_bdev": "raid_bdev1", 00:17:39.630 "method": "bdev_raid_add_base_bdev", 00:17:39.630 "req_id": 1 
00:17:39.630 } 00:17:39.630 Got JSON-RPC error response 00:17:39.630 response: 00:17:39.630 { 00:17:39.630 "code": -22, 00:17:39.630 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:39.630 } 00:17:39.630 14:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:39.631 14:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:17:39.631 14:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:39.631 14:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:39.631 14:43:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:39.631 14:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:40.614 14:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:40.614 14:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:40.614 14:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:40.614 14:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:40.614 14:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:40.614 14:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:40.614 14:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.614 14:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.614 14:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.614 14:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.614 14:43:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.614 14:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.614 14:43:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.614 14:43:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:40.614 14:43:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.614 14:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.614 "name": "raid_bdev1", 00:17:40.614 "uuid": "9f0d6b87-3129-47ec-9c4a-54a810aaa4de", 00:17:40.614 "strip_size_kb": 0, 00:17:40.614 "state": "online", 00:17:40.614 "raid_level": "raid1", 00:17:40.614 "superblock": true, 00:17:40.614 "num_base_bdevs": 4, 00:17:40.614 "num_base_bdevs_discovered": 2, 00:17:40.614 "num_base_bdevs_operational": 2, 00:17:40.614 "base_bdevs_list": [ 00:17:40.614 { 00:17:40.614 "name": null, 00:17:40.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.614 "is_configured": false, 00:17:40.614 "data_offset": 0, 00:17:40.614 "data_size": 63488 00:17:40.614 }, 00:17:40.614 { 00:17:40.614 "name": null, 00:17:40.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.614 "is_configured": false, 00:17:40.614 "data_offset": 2048, 00:17:40.614 "data_size": 63488 00:17:40.614 }, 00:17:40.614 { 00:17:40.614 "name": "BaseBdev3", 00:17:40.614 "uuid": "1311c2d6-c68f-5e85-88ba-4e73bb0fa6d0", 00:17:40.614 "is_configured": true, 00:17:40.614 "data_offset": 2048, 00:17:40.614 "data_size": 63488 00:17:40.614 }, 00:17:40.614 { 00:17:40.614 "name": "BaseBdev4", 00:17:40.614 "uuid": "a5dffaa3-4660-5e22-8649-328aef6a27cd", 00:17:40.614 "is_configured": true, 00:17:40.614 "data_offset": 2048, 00:17:40.614 "data_size": 63488 00:17:40.614 } 00:17:40.614 ] 00:17:40.614 }' 00:17:40.614 14:43:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.614 14:43:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:41.199 14:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:41.199 14:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.199 14:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:41.199 14:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:41.199 14:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.199 14:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.199 14:43:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.199 14:43:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:41.199 14:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.199 14:43:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.199 14:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.199 "name": "raid_bdev1", 00:17:41.199 "uuid": "9f0d6b87-3129-47ec-9c4a-54a810aaa4de", 00:17:41.199 "strip_size_kb": 0, 00:17:41.199 "state": "online", 00:17:41.199 "raid_level": "raid1", 00:17:41.199 "superblock": true, 00:17:41.199 "num_base_bdevs": 4, 00:17:41.199 "num_base_bdevs_discovered": 2, 00:17:41.199 "num_base_bdevs_operational": 2, 00:17:41.199 "base_bdevs_list": [ 00:17:41.199 { 00:17:41.199 "name": null, 00:17:41.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.199 "is_configured": false, 00:17:41.199 "data_offset": 0, 00:17:41.199 
"data_size": 63488 00:17:41.199 }, 00:17:41.199 { 00:17:41.199 "name": null, 00:17:41.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.199 "is_configured": false, 00:17:41.199 "data_offset": 2048, 00:17:41.199 "data_size": 63488 00:17:41.199 }, 00:17:41.199 { 00:17:41.199 "name": "BaseBdev3", 00:17:41.199 "uuid": "1311c2d6-c68f-5e85-88ba-4e73bb0fa6d0", 00:17:41.199 "is_configured": true, 00:17:41.199 "data_offset": 2048, 00:17:41.199 "data_size": 63488 00:17:41.199 }, 00:17:41.199 { 00:17:41.200 "name": "BaseBdev4", 00:17:41.200 "uuid": "a5dffaa3-4660-5e22-8649-328aef6a27cd", 00:17:41.200 "is_configured": true, 00:17:41.200 "data_offset": 2048, 00:17:41.200 "data_size": 63488 00:17:41.200 } 00:17:41.200 ] 00:17:41.200 }' 00:17:41.200 14:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.200 14:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:41.200 14:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.200 14:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:41.200 14:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79486 00:17:41.200 14:43:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 79486 ']' 00:17:41.200 14:43:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 79486 00:17:41.200 14:43:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:17:41.200 14:43:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:41.200 14:43:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79486 00:17:41.200 14:43:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:41.200 
14:43:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:41.200 killing process with pid 79486 00:17:41.200 14:43:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79486' 00:17:41.200 14:43:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 79486 00:17:41.200 Received shutdown signal, test time was about 19.612996 seconds 00:17:41.200 00:17:41.200 Latency(us) 00:17:41.200 [2024-11-04T14:43:40.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.200 [2024-11-04T14:43:40.323Z] =================================================================================================================== 00:17:41.200 [2024-11-04T14:43:40.323Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:41.200 [2024-11-04 14:43:40.278069] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:41.200 14:43:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 79486 00:17:41.200 [2024-11-04 14:43:40.278227] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:41.200 [2024-11-04 14:43:40.278322] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:41.200 [2024-11-04 14:43:40.278338] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:41.766 [2024-11-04 14:43:40.662136] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:42.700 14:43:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:17:42.700 00:17:42.700 real 0m23.302s 00:17:42.700 user 0m31.857s 00:17:42.700 sys 0m2.406s 00:17:42.700 14:43:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:42.700 ************************************ 00:17:42.700 END TEST raid_rebuild_test_sb_io 00:17:42.700 
************************************ 00:17:42.700 14:43:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:42.700 14:43:41 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:17:42.700 14:43:41 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:17:42.700 14:43:41 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:42.700 14:43:41 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:42.700 14:43:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:42.959 ************************************ 00:17:42.959 START TEST raid5f_state_function_test 00:17:42.959 ************************************ 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 false 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:42.959 14:43:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80226 00:17:42.959 Process raid pid: 80226 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80226' 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80226 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 80226 ']' 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:42.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:42.959 14:43:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.959 [2024-11-04 14:43:41.959856] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:17:42.959 [2024-11-04 14:43:41.960061] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.217 [2024-11-04 14:43:42.149145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.217 [2024-11-04 14:43:42.313011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.475 [2024-11-04 14:43:42.539139] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:43.475 [2024-11-04 14:43:42.539200] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:44.041 14:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:44.041 14:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:17:44.041 14:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:44.041 14:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.041 14:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.041 [2024-11-04 14:43:42.968452] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:44.041 [2024-11-04 14:43:42.968510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:44.041 [2024-11-04 14:43:42.968526] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:44.041 [2024-11-04 14:43:42.968541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:44.041 [2024-11-04 14:43:42.968550] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:17:44.041 [2024-11-04 14:43:42.968563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:44.041 14:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.041 14:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:44.041 14:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:44.041 14:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:44.041 14:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:44.041 14:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:44.041 14:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:44.041 14:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.041 14:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.041 14:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.041 14:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.041 14:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.042 14:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.042 14:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.042 14:43:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.042 14:43:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:17:44.042 14:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.042 "name": "Existed_Raid", 00:17:44.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.042 "strip_size_kb": 64, 00:17:44.042 "state": "configuring", 00:17:44.042 "raid_level": "raid5f", 00:17:44.042 "superblock": false, 00:17:44.042 "num_base_bdevs": 3, 00:17:44.042 "num_base_bdevs_discovered": 0, 00:17:44.042 "num_base_bdevs_operational": 3, 00:17:44.042 "base_bdevs_list": [ 00:17:44.042 { 00:17:44.042 "name": "BaseBdev1", 00:17:44.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.042 "is_configured": false, 00:17:44.042 "data_offset": 0, 00:17:44.042 "data_size": 0 00:17:44.042 }, 00:17:44.042 { 00:17:44.042 "name": "BaseBdev2", 00:17:44.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.042 "is_configured": false, 00:17:44.042 "data_offset": 0, 00:17:44.042 "data_size": 0 00:17:44.042 }, 00:17:44.042 { 00:17:44.042 "name": "BaseBdev3", 00:17:44.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.042 "is_configured": false, 00:17:44.042 "data_offset": 0, 00:17:44.042 "data_size": 0 00:17:44.042 } 00:17:44.042 ] 00:17:44.042 }' 00:17:44.042 14:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.042 14:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.609 [2024-11-04 14:43:43.508562] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:44.609 [2024-11-04 14:43:43.508625] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.609 [2024-11-04 14:43:43.516561] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:44.609 [2024-11-04 14:43:43.516629] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:44.609 [2024-11-04 14:43:43.516643] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:44.609 [2024-11-04 14:43:43.516658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:44.609 [2024-11-04 14:43:43.516667] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:44.609 [2024-11-04 14:43:43.516680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.609 [2024-11-04 14:43:43.562783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:44.609 BaseBdev1 00:17:44.609 14:43:43 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.609 [ 00:17:44.609 { 00:17:44.609 "name": "BaseBdev1", 00:17:44.609 "aliases": [ 00:17:44.609 "b77c96a3-780d-4165-a556-869ba1e0f6c2" 00:17:44.609 ], 00:17:44.609 "product_name": "Malloc disk", 00:17:44.609 "block_size": 512, 00:17:44.609 "num_blocks": 65536, 00:17:44.609 "uuid": "b77c96a3-780d-4165-a556-869ba1e0f6c2", 00:17:44.609 "assigned_rate_limits": { 00:17:44.609 "rw_ios_per_sec": 0, 00:17:44.609 
"rw_mbytes_per_sec": 0, 00:17:44.609 "r_mbytes_per_sec": 0, 00:17:44.609 "w_mbytes_per_sec": 0 00:17:44.609 }, 00:17:44.609 "claimed": true, 00:17:44.609 "claim_type": "exclusive_write", 00:17:44.609 "zoned": false, 00:17:44.609 "supported_io_types": { 00:17:44.609 "read": true, 00:17:44.609 "write": true, 00:17:44.609 "unmap": true, 00:17:44.609 "flush": true, 00:17:44.609 "reset": true, 00:17:44.609 "nvme_admin": false, 00:17:44.609 "nvme_io": false, 00:17:44.609 "nvme_io_md": false, 00:17:44.609 "write_zeroes": true, 00:17:44.609 "zcopy": true, 00:17:44.609 "get_zone_info": false, 00:17:44.609 "zone_management": false, 00:17:44.609 "zone_append": false, 00:17:44.609 "compare": false, 00:17:44.609 "compare_and_write": false, 00:17:44.609 "abort": true, 00:17:44.609 "seek_hole": false, 00:17:44.609 "seek_data": false, 00:17:44.609 "copy": true, 00:17:44.609 "nvme_iov_md": false 00:17:44.609 }, 00:17:44.609 "memory_domains": [ 00:17:44.609 { 00:17:44.609 "dma_device_id": "system", 00:17:44.609 "dma_device_type": 1 00:17:44.609 }, 00:17:44.609 { 00:17:44.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.609 "dma_device_type": 2 00:17:44.609 } 00:17:44.609 ], 00:17:44.609 "driver_specific": {} 00:17:44.609 } 00:17:44.609 ] 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:44.609 14:43:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.609 14:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.609 "name": "Existed_Raid", 00:17:44.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.609 "strip_size_kb": 64, 00:17:44.609 "state": "configuring", 00:17:44.609 "raid_level": "raid5f", 00:17:44.609 "superblock": false, 00:17:44.609 "num_base_bdevs": 3, 00:17:44.609 "num_base_bdevs_discovered": 1, 00:17:44.609 "num_base_bdevs_operational": 3, 00:17:44.609 "base_bdevs_list": [ 00:17:44.609 { 00:17:44.609 "name": "BaseBdev1", 00:17:44.609 "uuid": "b77c96a3-780d-4165-a556-869ba1e0f6c2", 00:17:44.609 "is_configured": true, 00:17:44.609 "data_offset": 0, 00:17:44.609 "data_size": 65536 00:17:44.609 }, 00:17:44.609 { 00:17:44.609 "name": 
"BaseBdev2", 00:17:44.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.610 "is_configured": false, 00:17:44.610 "data_offset": 0, 00:17:44.610 "data_size": 0 00:17:44.610 }, 00:17:44.610 { 00:17:44.610 "name": "BaseBdev3", 00:17:44.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.610 "is_configured": false, 00:17:44.610 "data_offset": 0, 00:17:44.610 "data_size": 0 00:17:44.610 } 00:17:44.610 ] 00:17:44.610 }' 00:17:44.610 14:43:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.610 14:43:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.176 14:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:45.176 14:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.176 14:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.176 [2024-11-04 14:43:44.127050] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:45.176 [2024-11-04 14:43:44.127118] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:45.176 14:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.176 14:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:45.176 14:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.176 14:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.176 [2024-11-04 14:43:44.135079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:45.176 [2024-11-04 14:43:44.137589] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:17:45.176 [2024-11-04 14:43:44.137656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:45.176 [2024-11-04 14:43:44.137671] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:45.176 [2024-11-04 14:43:44.137686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:45.176 14:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.176 14:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:45.176 14:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:45.176 14:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:45.176 14:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:45.176 14:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:45.176 14:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:45.176 14:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:45.176 14:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:45.176 14:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.176 14:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.176 14:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.176 14:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.176 14:43:44 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.176 14:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.176 14:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.176 14:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.176 14:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.176 14:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.176 "name": "Existed_Raid", 00:17:45.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.176 "strip_size_kb": 64, 00:17:45.177 "state": "configuring", 00:17:45.177 "raid_level": "raid5f", 00:17:45.177 "superblock": false, 00:17:45.177 "num_base_bdevs": 3, 00:17:45.177 "num_base_bdevs_discovered": 1, 00:17:45.177 "num_base_bdevs_operational": 3, 00:17:45.177 "base_bdevs_list": [ 00:17:45.177 { 00:17:45.177 "name": "BaseBdev1", 00:17:45.177 "uuid": "b77c96a3-780d-4165-a556-869ba1e0f6c2", 00:17:45.177 "is_configured": true, 00:17:45.177 "data_offset": 0, 00:17:45.177 "data_size": 65536 00:17:45.177 }, 00:17:45.177 { 00:17:45.177 "name": "BaseBdev2", 00:17:45.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.177 "is_configured": false, 00:17:45.177 "data_offset": 0, 00:17:45.177 "data_size": 0 00:17:45.177 }, 00:17:45.177 { 00:17:45.177 "name": "BaseBdev3", 00:17:45.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.177 "is_configured": false, 00:17:45.177 "data_offset": 0, 00:17:45.177 "data_size": 0 00:17:45.177 } 00:17:45.177 ] 00:17:45.177 }' 00:17:45.177 14:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.177 14:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.745 [2024-11-04 14:43:44.710449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:45.745 BaseBdev2 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:45.745 [ 00:17:45.745 { 00:17:45.745 "name": "BaseBdev2", 00:17:45.745 "aliases": [ 00:17:45.745 "be2c8174-3705-43ab-bef1-9af845c8dd9d" 00:17:45.745 ], 00:17:45.745 "product_name": "Malloc disk", 00:17:45.745 "block_size": 512, 00:17:45.745 "num_blocks": 65536, 00:17:45.745 "uuid": "be2c8174-3705-43ab-bef1-9af845c8dd9d", 00:17:45.745 "assigned_rate_limits": { 00:17:45.745 "rw_ios_per_sec": 0, 00:17:45.745 "rw_mbytes_per_sec": 0, 00:17:45.745 "r_mbytes_per_sec": 0, 00:17:45.745 "w_mbytes_per_sec": 0 00:17:45.745 }, 00:17:45.745 "claimed": true, 00:17:45.745 "claim_type": "exclusive_write", 00:17:45.745 "zoned": false, 00:17:45.745 "supported_io_types": { 00:17:45.745 "read": true, 00:17:45.745 "write": true, 00:17:45.745 "unmap": true, 00:17:45.745 "flush": true, 00:17:45.745 "reset": true, 00:17:45.745 "nvme_admin": false, 00:17:45.745 "nvme_io": false, 00:17:45.745 "nvme_io_md": false, 00:17:45.745 "write_zeroes": true, 00:17:45.745 "zcopy": true, 00:17:45.745 "get_zone_info": false, 00:17:45.745 "zone_management": false, 00:17:45.745 "zone_append": false, 00:17:45.745 "compare": false, 00:17:45.745 "compare_and_write": false, 00:17:45.745 "abort": true, 00:17:45.745 "seek_hole": false, 00:17:45.745 "seek_data": false, 00:17:45.745 "copy": true, 00:17:45.745 "nvme_iov_md": false 00:17:45.745 }, 00:17:45.745 "memory_domains": [ 00:17:45.745 { 00:17:45.745 "dma_device_id": "system", 00:17:45.745 "dma_device_type": 1 00:17:45.745 }, 00:17:45.745 { 00:17:45.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.745 "dma_device_type": 2 00:17:45.745 } 00:17:45.745 ], 00:17:45.745 "driver_specific": {} 00:17:45.745 } 00:17:45.745 ] 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:17:45.745 "name": "Existed_Raid", 00:17:45.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.745 "strip_size_kb": 64, 00:17:45.745 "state": "configuring", 00:17:45.745 "raid_level": "raid5f", 00:17:45.745 "superblock": false, 00:17:45.745 "num_base_bdevs": 3, 00:17:45.745 "num_base_bdevs_discovered": 2, 00:17:45.745 "num_base_bdevs_operational": 3, 00:17:45.745 "base_bdevs_list": [ 00:17:45.745 { 00:17:45.745 "name": "BaseBdev1", 00:17:45.745 "uuid": "b77c96a3-780d-4165-a556-869ba1e0f6c2", 00:17:45.745 "is_configured": true, 00:17:45.745 "data_offset": 0, 00:17:45.745 "data_size": 65536 00:17:45.745 }, 00:17:45.745 { 00:17:45.745 "name": "BaseBdev2", 00:17:45.745 "uuid": "be2c8174-3705-43ab-bef1-9af845c8dd9d", 00:17:45.745 "is_configured": true, 00:17:45.745 "data_offset": 0, 00:17:45.745 "data_size": 65536 00:17:45.745 }, 00:17:45.745 { 00:17:45.745 "name": "BaseBdev3", 00:17:45.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.745 "is_configured": false, 00:17:45.745 "data_offset": 0, 00:17:45.745 "data_size": 0 00:17:45.745 } 00:17:45.745 ] 00:17:45.745 }' 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.745 14:43:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.313 14:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:46.313 14:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.313 14:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.313 [2024-11-04 14:43:45.326756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:46.313 [2024-11-04 14:43:45.326850] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:46.313 [2024-11-04 14:43:45.326870] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:46.313 [2024-11-04 14:43:45.327279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:46.313 [2024-11-04 14:43:45.332713] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:46.313 [2024-11-04 14:43:45.332744] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:46.313 [2024-11-04 14:43:45.333158] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.313 BaseBdev3 00:17:46.313 14:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.314 14:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:46.314 14:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:46.314 14:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:46.314 14:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:46.314 14:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:46.314 14:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:46.314 14:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:46.314 14:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.314 14:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.314 14:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.314 14:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:17:46.314 14:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.314 14:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.314 [ 00:17:46.314 { 00:17:46.314 "name": "BaseBdev3", 00:17:46.314 "aliases": [ 00:17:46.314 "de4c4db0-06ef-4db1-93e7-d80facef0d82" 00:17:46.314 ], 00:17:46.314 "product_name": "Malloc disk", 00:17:46.314 "block_size": 512, 00:17:46.314 "num_blocks": 65536, 00:17:46.314 "uuid": "de4c4db0-06ef-4db1-93e7-d80facef0d82", 00:17:46.314 "assigned_rate_limits": { 00:17:46.314 "rw_ios_per_sec": 0, 00:17:46.314 "rw_mbytes_per_sec": 0, 00:17:46.314 "r_mbytes_per_sec": 0, 00:17:46.314 "w_mbytes_per_sec": 0 00:17:46.314 }, 00:17:46.314 "claimed": true, 00:17:46.314 "claim_type": "exclusive_write", 00:17:46.314 "zoned": false, 00:17:46.314 "supported_io_types": { 00:17:46.314 "read": true, 00:17:46.314 "write": true, 00:17:46.314 "unmap": true, 00:17:46.314 "flush": true, 00:17:46.314 "reset": true, 00:17:46.314 "nvme_admin": false, 00:17:46.314 "nvme_io": false, 00:17:46.314 "nvme_io_md": false, 00:17:46.314 "write_zeroes": true, 00:17:46.314 "zcopy": true, 00:17:46.314 "get_zone_info": false, 00:17:46.314 "zone_management": false, 00:17:46.314 "zone_append": false, 00:17:46.314 "compare": false, 00:17:46.314 "compare_and_write": false, 00:17:46.314 "abort": true, 00:17:46.314 "seek_hole": false, 00:17:46.314 "seek_data": false, 00:17:46.314 "copy": true, 00:17:46.314 "nvme_iov_md": false 00:17:46.314 }, 00:17:46.314 "memory_domains": [ 00:17:46.314 { 00:17:46.314 "dma_device_id": "system", 00:17:46.314 "dma_device_type": 1 00:17:46.314 }, 00:17:46.314 { 00:17:46.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.314 "dma_device_type": 2 00:17:46.314 } 00:17:46.314 ], 00:17:46.314 "driver_specific": {} 00:17:46.314 } 00:17:46.314 ] 00:17:46.314 14:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:17:46.314 14:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:46.314 14:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:46.314 14:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:46.314 14:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:46.314 14:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.314 14:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.314 14:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:46.314 14:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:46.314 14:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:46.314 14:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.314 14:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.314 14:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.314 14:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.314 14:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.314 14:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.314 14:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.314 14:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.314 14:43:45 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.314 14:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.314 "name": "Existed_Raid", 00:17:46.314 "uuid": "bb142ef9-bfbe-4213-a27d-230d2e437751", 00:17:46.314 "strip_size_kb": 64, 00:17:46.314 "state": "online", 00:17:46.314 "raid_level": "raid5f", 00:17:46.314 "superblock": false, 00:17:46.314 "num_base_bdevs": 3, 00:17:46.314 "num_base_bdevs_discovered": 3, 00:17:46.314 "num_base_bdevs_operational": 3, 00:17:46.314 "base_bdevs_list": [ 00:17:46.314 { 00:17:46.314 "name": "BaseBdev1", 00:17:46.314 "uuid": "b77c96a3-780d-4165-a556-869ba1e0f6c2", 00:17:46.314 "is_configured": true, 00:17:46.314 "data_offset": 0, 00:17:46.314 "data_size": 65536 00:17:46.314 }, 00:17:46.314 { 00:17:46.314 "name": "BaseBdev2", 00:17:46.314 "uuid": "be2c8174-3705-43ab-bef1-9af845c8dd9d", 00:17:46.314 "is_configured": true, 00:17:46.314 "data_offset": 0, 00:17:46.314 "data_size": 65536 00:17:46.314 }, 00:17:46.314 { 00:17:46.314 "name": "BaseBdev3", 00:17:46.314 "uuid": "de4c4db0-06ef-4db1-93e7-d80facef0d82", 00:17:46.314 "is_configured": true, 00:17:46.314 "data_offset": 0, 00:17:46.314 "data_size": 65536 00:17:46.314 } 00:17:46.314 ] 00:17:46.314 }' 00:17:46.314 14:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.314 14:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.907 14:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:46.907 14:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:46.907 14:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:46.907 14:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:46.907 14:43:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:46.907 14:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:46.907 14:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:46.907 14:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.907 14:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.907 14:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:46.907 [2024-11-04 14:43:45.891272] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:46.907 14:43:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.907 14:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:46.907 "name": "Existed_Raid", 00:17:46.907 "aliases": [ 00:17:46.907 "bb142ef9-bfbe-4213-a27d-230d2e437751" 00:17:46.907 ], 00:17:46.907 "product_name": "Raid Volume", 00:17:46.907 "block_size": 512, 00:17:46.907 "num_blocks": 131072, 00:17:46.907 "uuid": "bb142ef9-bfbe-4213-a27d-230d2e437751", 00:17:46.907 "assigned_rate_limits": { 00:17:46.907 "rw_ios_per_sec": 0, 00:17:46.907 "rw_mbytes_per_sec": 0, 00:17:46.907 "r_mbytes_per_sec": 0, 00:17:46.907 "w_mbytes_per_sec": 0 00:17:46.907 }, 00:17:46.907 "claimed": false, 00:17:46.907 "zoned": false, 00:17:46.907 "supported_io_types": { 00:17:46.907 "read": true, 00:17:46.907 "write": true, 00:17:46.907 "unmap": false, 00:17:46.907 "flush": false, 00:17:46.907 "reset": true, 00:17:46.907 "nvme_admin": false, 00:17:46.907 "nvme_io": false, 00:17:46.907 "nvme_io_md": false, 00:17:46.907 "write_zeroes": true, 00:17:46.907 "zcopy": false, 00:17:46.907 "get_zone_info": false, 00:17:46.907 "zone_management": false, 00:17:46.907 "zone_append": false, 
00:17:46.907 "compare": false, 00:17:46.907 "compare_and_write": false, 00:17:46.907 "abort": false, 00:17:46.907 "seek_hole": false, 00:17:46.907 "seek_data": false, 00:17:46.907 "copy": false, 00:17:46.907 "nvme_iov_md": false 00:17:46.907 }, 00:17:46.907 "driver_specific": { 00:17:46.907 "raid": { 00:17:46.907 "uuid": "bb142ef9-bfbe-4213-a27d-230d2e437751", 00:17:46.907 "strip_size_kb": 64, 00:17:46.907 "state": "online", 00:17:46.907 "raid_level": "raid5f", 00:17:46.907 "superblock": false, 00:17:46.907 "num_base_bdevs": 3, 00:17:46.907 "num_base_bdevs_discovered": 3, 00:17:46.907 "num_base_bdevs_operational": 3, 00:17:46.907 "base_bdevs_list": [ 00:17:46.907 { 00:17:46.907 "name": "BaseBdev1", 00:17:46.907 "uuid": "b77c96a3-780d-4165-a556-869ba1e0f6c2", 00:17:46.907 "is_configured": true, 00:17:46.907 "data_offset": 0, 00:17:46.907 "data_size": 65536 00:17:46.907 }, 00:17:46.907 { 00:17:46.907 "name": "BaseBdev2", 00:17:46.907 "uuid": "be2c8174-3705-43ab-bef1-9af845c8dd9d", 00:17:46.907 "is_configured": true, 00:17:46.907 "data_offset": 0, 00:17:46.907 "data_size": 65536 00:17:46.907 }, 00:17:46.907 { 00:17:46.907 "name": "BaseBdev3", 00:17:46.907 "uuid": "de4c4db0-06ef-4db1-93e7-d80facef0d82", 00:17:46.907 "is_configured": true, 00:17:46.907 "data_offset": 0, 00:17:46.907 "data_size": 65536 00:17:46.907 } 00:17:46.907 ] 00:17:46.907 } 00:17:46.907 } 00:17:46.907 }' 00:17:46.907 14:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:46.907 14:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:46.907 BaseBdev2 00:17:46.907 BaseBdev3' 00:17:46.907 14:43:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:47.166 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:17:47.166 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:47.166 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:47.166 14:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.166 14:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.166 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:47.166 14:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.166 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:47.166 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:47.166 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:47.166 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:47.166 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:47.166 14:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.166 14:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.166 14:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.166 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:47.166 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:47.166 14:43:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:47.166 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:47.166 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:47.166 14:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.166 14:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.166 14:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.166 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:47.166 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:47.166 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:47.166 14:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.166 14:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.166 [2024-11-04 14:43:46.223115] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:47.425 14:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.425 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:47.425 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:47.425 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:47.425 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:47.425 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:47.425 
14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:17:47.425 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:47.425 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.425 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:47.425 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:47.425 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:47.425 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.425 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.425 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.425 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.425 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.425 14:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.425 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.425 14:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.425 14:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.425 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.425 "name": "Existed_Raid", 00:17:47.425 "uuid": "bb142ef9-bfbe-4213-a27d-230d2e437751", 00:17:47.425 "strip_size_kb": 64, 00:17:47.425 "state": 
"online", 00:17:47.425 "raid_level": "raid5f", 00:17:47.425 "superblock": false, 00:17:47.425 "num_base_bdevs": 3, 00:17:47.425 "num_base_bdevs_discovered": 2, 00:17:47.425 "num_base_bdevs_operational": 2, 00:17:47.425 "base_bdevs_list": [ 00:17:47.425 { 00:17:47.425 "name": null, 00:17:47.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.425 "is_configured": false, 00:17:47.425 "data_offset": 0, 00:17:47.425 "data_size": 65536 00:17:47.425 }, 00:17:47.425 { 00:17:47.425 "name": "BaseBdev2", 00:17:47.425 "uuid": "be2c8174-3705-43ab-bef1-9af845c8dd9d", 00:17:47.425 "is_configured": true, 00:17:47.425 "data_offset": 0, 00:17:47.425 "data_size": 65536 00:17:47.425 }, 00:17:47.425 { 00:17:47.425 "name": "BaseBdev3", 00:17:47.425 "uuid": "de4c4db0-06ef-4db1-93e7-d80facef0d82", 00:17:47.425 "is_configured": true, 00:17:47.425 "data_offset": 0, 00:17:47.425 "data_size": 65536 00:17:47.425 } 00:17:47.425 ] 00:17:47.425 }' 00:17:47.425 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.425 14:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.993 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:47.993 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:47.993 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.993 14:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.993 14:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.993 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:47.993 14:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.993 14:43:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:47.993 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:47.993 14:43:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:47.993 14:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.993 14:43:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.993 [2024-11-04 14:43:46.914621] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:47.993 [2024-11-04 14:43:46.914773] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:47.993 [2024-11-04 14:43:47.000348] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:47.993 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.993 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:47.993 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:47.993 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.993 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.993 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.993 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:47.993 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.993 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:47.993 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:17:47.993 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:47.993 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.993 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.993 [2024-11-04 14:43:47.064483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:47.993 [2024-11-04 14:43:47.064564] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:48.253 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.253 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:48.253 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:48.253 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.253 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.253 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.253 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:48.253 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.253 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:48.253 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:48.253 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:17:48.253 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:48.253 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:17:48.253 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:48.253 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.253 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.253 BaseBdev2 00:17:48.253 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.253 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:48.253 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:48.253 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:48.253 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:48.253 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:48.253 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:48.253 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:48.253 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.253 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.253 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.254 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:48.254 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.254 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:17:48.254 [ 00:17:48.254 { 00:17:48.254 "name": "BaseBdev2", 00:17:48.254 "aliases": [ 00:17:48.254 "4fefce18-11d1-4d31-a097-01ab7c8ec12e" 00:17:48.254 ], 00:17:48.254 "product_name": "Malloc disk", 00:17:48.254 "block_size": 512, 00:17:48.254 "num_blocks": 65536, 00:17:48.254 "uuid": "4fefce18-11d1-4d31-a097-01ab7c8ec12e", 00:17:48.254 "assigned_rate_limits": { 00:17:48.254 "rw_ios_per_sec": 0, 00:17:48.254 "rw_mbytes_per_sec": 0, 00:17:48.254 "r_mbytes_per_sec": 0, 00:17:48.254 "w_mbytes_per_sec": 0 00:17:48.254 }, 00:17:48.254 "claimed": false, 00:17:48.254 "zoned": false, 00:17:48.254 "supported_io_types": { 00:17:48.254 "read": true, 00:17:48.254 "write": true, 00:17:48.254 "unmap": true, 00:17:48.254 "flush": true, 00:17:48.254 "reset": true, 00:17:48.254 "nvme_admin": false, 00:17:48.254 "nvme_io": false, 00:17:48.254 "nvme_io_md": false, 00:17:48.254 "write_zeroes": true, 00:17:48.254 "zcopy": true, 00:17:48.254 "get_zone_info": false, 00:17:48.254 "zone_management": false, 00:17:48.254 "zone_append": false, 00:17:48.254 "compare": false, 00:17:48.254 "compare_and_write": false, 00:17:48.254 "abort": true, 00:17:48.254 "seek_hole": false, 00:17:48.254 "seek_data": false, 00:17:48.254 "copy": true, 00:17:48.254 "nvme_iov_md": false 00:17:48.254 }, 00:17:48.254 "memory_domains": [ 00:17:48.254 { 00:17:48.254 "dma_device_id": "system", 00:17:48.254 "dma_device_type": 1 00:17:48.254 }, 00:17:48.254 { 00:17:48.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.254 "dma_device_type": 2 00:17:48.254 } 00:17:48.254 ], 00:17:48.254 "driver_specific": {} 00:17:48.254 } 00:17:48.254 ] 00:17:48.254 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.254 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:48.254 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:48.254 14:43:47 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:48.254 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:48.254 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.254 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.254 BaseBdev3 00:17:48.254 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.254 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:48.254 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:48.254 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:48.254 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:48.254 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:48.254 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:48.254 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:48.254 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.254 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.254 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.254 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:48.254 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.254 14:43:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:48.254 [ 00:17:48.254 { 00:17:48.254 "name": "BaseBdev3", 00:17:48.254 "aliases": [ 00:17:48.254 "9d38381e-d9b2-4ad4-b177-9d007d3436da" 00:17:48.254 ], 00:17:48.254 "product_name": "Malloc disk", 00:17:48.254 "block_size": 512, 00:17:48.254 "num_blocks": 65536, 00:17:48.254 "uuid": "9d38381e-d9b2-4ad4-b177-9d007d3436da", 00:17:48.254 "assigned_rate_limits": { 00:17:48.254 "rw_ios_per_sec": 0, 00:17:48.254 "rw_mbytes_per_sec": 0, 00:17:48.254 "r_mbytes_per_sec": 0, 00:17:48.254 "w_mbytes_per_sec": 0 00:17:48.254 }, 00:17:48.254 "claimed": false, 00:17:48.254 "zoned": false, 00:17:48.254 "supported_io_types": { 00:17:48.254 "read": true, 00:17:48.254 "write": true, 00:17:48.254 "unmap": true, 00:17:48.254 "flush": true, 00:17:48.254 "reset": true, 00:17:48.254 "nvme_admin": false, 00:17:48.254 "nvme_io": false, 00:17:48.254 "nvme_io_md": false, 00:17:48.254 "write_zeroes": true, 00:17:48.254 "zcopy": true, 00:17:48.254 "get_zone_info": false, 00:17:48.254 "zone_management": false, 00:17:48.254 "zone_append": false, 00:17:48.254 "compare": false, 00:17:48.254 "compare_and_write": false, 00:17:48.254 "abort": true, 00:17:48.254 "seek_hole": false, 00:17:48.254 "seek_data": false, 00:17:48.254 "copy": true, 00:17:48.254 "nvme_iov_md": false 00:17:48.254 }, 00:17:48.254 "memory_domains": [ 00:17:48.254 { 00:17:48.254 "dma_device_id": "system", 00:17:48.254 "dma_device_type": 1 00:17:48.254 }, 00:17:48.254 { 00:17:48.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.254 "dma_device_type": 2 00:17:48.254 } 00:17:48.254 ], 00:17:48.254 "driver_specific": {} 00:17:48.254 } 00:17:48.254 ] 00:17:48.254 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.254 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:48.255 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:48.255 14:43:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:48.255 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:48.255 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.255 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.255 [2024-11-04 14:43:47.367223] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:48.255 [2024-11-04 14:43:47.367278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:48.255 [2024-11-04 14:43:47.367311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:48.255 [2024-11-04 14:43:47.369782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:48.255 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.255 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:48.255 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:48.255 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:48.255 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:48.255 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:48.255 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:48.255 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.255 14:43:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.255 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.255 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.514 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.514 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.514 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.514 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.514 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.514 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.514 "name": "Existed_Raid", 00:17:48.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.514 "strip_size_kb": 64, 00:17:48.514 "state": "configuring", 00:17:48.514 "raid_level": "raid5f", 00:17:48.514 "superblock": false, 00:17:48.514 "num_base_bdevs": 3, 00:17:48.514 "num_base_bdevs_discovered": 2, 00:17:48.514 "num_base_bdevs_operational": 3, 00:17:48.514 "base_bdevs_list": [ 00:17:48.514 { 00:17:48.514 "name": "BaseBdev1", 00:17:48.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.514 "is_configured": false, 00:17:48.514 "data_offset": 0, 00:17:48.514 "data_size": 0 00:17:48.514 }, 00:17:48.514 { 00:17:48.514 "name": "BaseBdev2", 00:17:48.514 "uuid": "4fefce18-11d1-4d31-a097-01ab7c8ec12e", 00:17:48.514 "is_configured": true, 00:17:48.514 "data_offset": 0, 00:17:48.514 "data_size": 65536 00:17:48.514 }, 00:17:48.514 { 00:17:48.514 "name": "BaseBdev3", 00:17:48.514 "uuid": "9d38381e-d9b2-4ad4-b177-9d007d3436da", 00:17:48.514 "is_configured": true, 
00:17:48.514 "data_offset": 0, 00:17:48.514 "data_size": 65536 00:17:48.514 } 00:17:48.514 ] 00:17:48.514 }' 00:17:48.514 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.514 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.773 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:48.773 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.773 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.773 [2024-11-04 14:43:47.891362] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:49.032 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.033 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:49.033 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:49.033 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:49.033 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:49.033 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.033 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:49.033 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.033 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.033 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.033 14:43:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.033 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.033 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.033 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.033 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.033 14:43:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.033 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.033 "name": "Existed_Raid", 00:17:49.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.033 "strip_size_kb": 64, 00:17:49.033 "state": "configuring", 00:17:49.033 "raid_level": "raid5f", 00:17:49.033 "superblock": false, 00:17:49.033 "num_base_bdevs": 3, 00:17:49.033 "num_base_bdevs_discovered": 1, 00:17:49.033 "num_base_bdevs_operational": 3, 00:17:49.033 "base_bdevs_list": [ 00:17:49.033 { 00:17:49.033 "name": "BaseBdev1", 00:17:49.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.033 "is_configured": false, 00:17:49.033 "data_offset": 0, 00:17:49.033 "data_size": 0 00:17:49.033 }, 00:17:49.033 { 00:17:49.033 "name": null, 00:17:49.033 "uuid": "4fefce18-11d1-4d31-a097-01ab7c8ec12e", 00:17:49.033 "is_configured": false, 00:17:49.033 "data_offset": 0, 00:17:49.033 "data_size": 65536 00:17:49.033 }, 00:17:49.033 { 00:17:49.033 "name": "BaseBdev3", 00:17:49.033 "uuid": "9d38381e-d9b2-4ad4-b177-9d007d3436da", 00:17:49.033 "is_configured": true, 00:17:49.033 "data_offset": 0, 00:17:49.033 "data_size": 65536 00:17:49.033 } 00:17:49.033 ] 00:17:49.033 }' 00:17:49.033 14:43:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.033 14:43:47 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.600 14:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.600 14:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.600 14:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:49.600 14:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.600 14:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.600 14:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:49.600 14:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:49.600 14:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.600 14:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.600 [2024-11-04 14:43:48.510591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:49.600 BaseBdev1 00:17:49.600 14:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.600 14:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:49.600 14:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:49.600 14:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:49.600 14:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:49.600 14:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:49.600 14:43:48 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:49.600 14:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:49.600 14:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.600 14:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.600 14:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.600 14:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:49.600 14:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.600 14:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.600 [ 00:17:49.600 { 00:17:49.600 "name": "BaseBdev1", 00:17:49.600 "aliases": [ 00:17:49.600 "15134fd8-76d8-4be2-9d47-4aff531a7c44" 00:17:49.600 ], 00:17:49.600 "product_name": "Malloc disk", 00:17:49.600 "block_size": 512, 00:17:49.600 "num_blocks": 65536, 00:17:49.600 "uuid": "15134fd8-76d8-4be2-9d47-4aff531a7c44", 00:17:49.600 "assigned_rate_limits": { 00:17:49.600 "rw_ios_per_sec": 0, 00:17:49.600 "rw_mbytes_per_sec": 0, 00:17:49.600 "r_mbytes_per_sec": 0, 00:17:49.600 "w_mbytes_per_sec": 0 00:17:49.600 }, 00:17:49.600 "claimed": true, 00:17:49.600 "claim_type": "exclusive_write", 00:17:49.600 "zoned": false, 00:17:49.600 "supported_io_types": { 00:17:49.600 "read": true, 00:17:49.600 "write": true, 00:17:49.600 "unmap": true, 00:17:49.600 "flush": true, 00:17:49.600 "reset": true, 00:17:49.600 "nvme_admin": false, 00:17:49.600 "nvme_io": false, 00:17:49.600 "nvme_io_md": false, 00:17:49.600 "write_zeroes": true, 00:17:49.600 "zcopy": true, 00:17:49.600 "get_zone_info": false, 00:17:49.600 "zone_management": false, 00:17:49.600 "zone_append": false, 00:17:49.600 
"compare": false, 00:17:49.600 "compare_and_write": false, 00:17:49.600 "abort": true, 00:17:49.600 "seek_hole": false, 00:17:49.600 "seek_data": false, 00:17:49.600 "copy": true, 00:17:49.600 "nvme_iov_md": false 00:17:49.600 }, 00:17:49.600 "memory_domains": [ 00:17:49.600 { 00:17:49.600 "dma_device_id": "system", 00:17:49.600 "dma_device_type": 1 00:17:49.600 }, 00:17:49.600 { 00:17:49.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.600 "dma_device_type": 2 00:17:49.600 } 00:17:49.600 ], 00:17:49.600 "driver_specific": {} 00:17:49.600 } 00:17:49.600 ] 00:17:49.600 14:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.600 14:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:49.601 14:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:49.601 14:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:49.601 14:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:49.601 14:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:49.601 14:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.601 14:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:49.601 14:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.601 14:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.601 14:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.601 14:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.601 14:43:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.601 14:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.601 14:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.601 14:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.601 14:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.601 14:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.601 "name": "Existed_Raid", 00:17:49.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.601 "strip_size_kb": 64, 00:17:49.601 "state": "configuring", 00:17:49.601 "raid_level": "raid5f", 00:17:49.601 "superblock": false, 00:17:49.601 "num_base_bdevs": 3, 00:17:49.601 "num_base_bdevs_discovered": 2, 00:17:49.601 "num_base_bdevs_operational": 3, 00:17:49.601 "base_bdevs_list": [ 00:17:49.601 { 00:17:49.601 "name": "BaseBdev1", 00:17:49.601 "uuid": "15134fd8-76d8-4be2-9d47-4aff531a7c44", 00:17:49.601 "is_configured": true, 00:17:49.601 "data_offset": 0, 00:17:49.601 "data_size": 65536 00:17:49.601 }, 00:17:49.601 { 00:17:49.601 "name": null, 00:17:49.601 "uuid": "4fefce18-11d1-4d31-a097-01ab7c8ec12e", 00:17:49.601 "is_configured": false, 00:17:49.601 "data_offset": 0, 00:17:49.601 "data_size": 65536 00:17:49.601 }, 00:17:49.601 { 00:17:49.601 "name": "BaseBdev3", 00:17:49.601 "uuid": "9d38381e-d9b2-4ad4-b177-9d007d3436da", 00:17:49.601 "is_configured": true, 00:17:49.601 "data_offset": 0, 00:17:49.601 "data_size": 65536 00:17:49.601 } 00:17:49.601 ] 00:17:49.601 }' 00:17:49.601 14:43:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.601 14:43:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.168 14:43:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:50.168 14:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.168 14:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.168 14:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.168 14:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.168 14:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:50.168 14:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:50.168 14:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.168 14:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.168 [2024-11-04 14:43:49.130836] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:50.168 14:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.168 14:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:50.168 14:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:50.168 14:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:50.168 14:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:50.168 14:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:50.168 14:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:50.168 14:43:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.168 14:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.168 14:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.168 14:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.168 14:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.168 14:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.168 14:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.168 14:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.168 14:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.168 14:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.168 "name": "Existed_Raid", 00:17:50.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.168 "strip_size_kb": 64, 00:17:50.168 "state": "configuring", 00:17:50.168 "raid_level": "raid5f", 00:17:50.168 "superblock": false, 00:17:50.168 "num_base_bdevs": 3, 00:17:50.168 "num_base_bdevs_discovered": 1, 00:17:50.168 "num_base_bdevs_operational": 3, 00:17:50.168 "base_bdevs_list": [ 00:17:50.168 { 00:17:50.168 "name": "BaseBdev1", 00:17:50.168 "uuid": "15134fd8-76d8-4be2-9d47-4aff531a7c44", 00:17:50.168 "is_configured": true, 00:17:50.168 "data_offset": 0, 00:17:50.168 "data_size": 65536 00:17:50.168 }, 00:17:50.168 { 00:17:50.168 "name": null, 00:17:50.168 "uuid": "4fefce18-11d1-4d31-a097-01ab7c8ec12e", 00:17:50.168 "is_configured": false, 00:17:50.168 "data_offset": 0, 00:17:50.168 "data_size": 65536 00:17:50.168 }, 00:17:50.168 { 00:17:50.168 "name": null, 
00:17:50.168 "uuid": "9d38381e-d9b2-4ad4-b177-9d007d3436da", 00:17:50.168 "is_configured": false, 00:17:50.168 "data_offset": 0, 00:17:50.168 "data_size": 65536 00:17:50.168 } 00:17:50.168 ] 00:17:50.168 }' 00:17:50.168 14:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.168 14:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.739 14:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.739 14:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:50.739 14:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.739 14:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.739 14:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.739 14:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:50.739 14:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:50.739 14:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.739 14:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.739 [2024-11-04 14:43:49.755071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:50.739 14:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.739 14:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:50.739 14:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:50.739 14:43:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:50.739 14:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:50.739 14:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:50.739 14:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:50.739 14:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.739 14:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.739 14:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.739 14:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.739 14:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.739 14:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.739 14:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.739 14:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.739 14:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.739 14:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.739 "name": "Existed_Raid", 00:17:50.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.739 "strip_size_kb": 64, 00:17:50.739 "state": "configuring", 00:17:50.739 "raid_level": "raid5f", 00:17:50.739 "superblock": false, 00:17:50.739 "num_base_bdevs": 3, 00:17:50.739 "num_base_bdevs_discovered": 2, 00:17:50.739 "num_base_bdevs_operational": 3, 00:17:50.739 "base_bdevs_list": [ 00:17:50.739 { 
00:17:50.739 "name": "BaseBdev1", 00:17:50.739 "uuid": "15134fd8-76d8-4be2-9d47-4aff531a7c44", 00:17:50.739 "is_configured": true, 00:17:50.739 "data_offset": 0, 00:17:50.739 "data_size": 65536 00:17:50.739 }, 00:17:50.739 { 00:17:50.739 "name": null, 00:17:50.739 "uuid": "4fefce18-11d1-4d31-a097-01ab7c8ec12e", 00:17:50.739 "is_configured": false, 00:17:50.739 "data_offset": 0, 00:17:50.739 "data_size": 65536 00:17:50.739 }, 00:17:50.739 { 00:17:50.739 "name": "BaseBdev3", 00:17:50.739 "uuid": "9d38381e-d9b2-4ad4-b177-9d007d3436da", 00:17:50.739 "is_configured": true, 00:17:50.739 "data_offset": 0, 00:17:50.739 "data_size": 65536 00:17:50.739 } 00:17:50.739 ] 00:17:50.739 }' 00:17:50.739 14:43:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.739 14:43:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.305 14:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.305 14:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.305 14:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.305 14:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:51.305 14:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.305 14:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:51.305 14:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:51.305 14:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.305 14:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.305 [2024-11-04 14:43:50.363306] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:51.564 14:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.564 14:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:51.564 14:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:51.564 14:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:51.564 14:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:51.564 14:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:51.564 14:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:51.564 14:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.564 14:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.564 14:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.564 14:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.564 14:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.564 14:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.564 14:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.564 14:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.564 14:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.564 14:43:50 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.564 "name": "Existed_Raid", 00:17:51.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.564 "strip_size_kb": 64, 00:17:51.564 "state": "configuring", 00:17:51.564 "raid_level": "raid5f", 00:17:51.565 "superblock": false, 00:17:51.565 "num_base_bdevs": 3, 00:17:51.565 "num_base_bdevs_discovered": 1, 00:17:51.565 "num_base_bdevs_operational": 3, 00:17:51.565 "base_bdevs_list": [ 00:17:51.565 { 00:17:51.565 "name": null, 00:17:51.565 "uuid": "15134fd8-76d8-4be2-9d47-4aff531a7c44", 00:17:51.565 "is_configured": false, 00:17:51.565 "data_offset": 0, 00:17:51.565 "data_size": 65536 00:17:51.565 }, 00:17:51.565 { 00:17:51.565 "name": null, 00:17:51.565 "uuid": "4fefce18-11d1-4d31-a097-01ab7c8ec12e", 00:17:51.565 "is_configured": false, 00:17:51.565 "data_offset": 0, 00:17:51.565 "data_size": 65536 00:17:51.565 }, 00:17:51.565 { 00:17:51.565 "name": "BaseBdev3", 00:17:51.565 "uuid": "9d38381e-d9b2-4ad4-b177-9d007d3436da", 00:17:51.565 "is_configured": true, 00:17:51.565 "data_offset": 0, 00:17:51.565 "data_size": 65536 00:17:51.565 } 00:17:51.565 ] 00:17:51.565 }' 00:17:51.565 14:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.565 14:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.133 14:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:52.133 14:43:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.133 14:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.133 14:43:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.133 14:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.133 14:43:51 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:52.133 14:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:52.133 14:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.133 14:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.133 [2024-11-04 14:43:51.042526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:52.133 14:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.133 14:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:52.133 14:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:52.133 14:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:52.133 14:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:52.133 14:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:52.133 14:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:52.133 14:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.133 14:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.133 14:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.133 14:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.133 14:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.133 14:43:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.133 14:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.133 14:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.133 14:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.133 14:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.133 "name": "Existed_Raid", 00:17:52.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.133 "strip_size_kb": 64, 00:17:52.133 "state": "configuring", 00:17:52.133 "raid_level": "raid5f", 00:17:52.133 "superblock": false, 00:17:52.133 "num_base_bdevs": 3, 00:17:52.133 "num_base_bdevs_discovered": 2, 00:17:52.133 "num_base_bdevs_operational": 3, 00:17:52.133 "base_bdevs_list": [ 00:17:52.133 { 00:17:52.133 "name": null, 00:17:52.133 "uuid": "15134fd8-76d8-4be2-9d47-4aff531a7c44", 00:17:52.133 "is_configured": false, 00:17:52.133 "data_offset": 0, 00:17:52.133 "data_size": 65536 00:17:52.133 }, 00:17:52.133 { 00:17:52.133 "name": "BaseBdev2", 00:17:52.133 "uuid": "4fefce18-11d1-4d31-a097-01ab7c8ec12e", 00:17:52.133 "is_configured": true, 00:17:52.133 "data_offset": 0, 00:17:52.133 "data_size": 65536 00:17:52.133 }, 00:17:52.133 { 00:17:52.133 "name": "BaseBdev3", 00:17:52.133 "uuid": "9d38381e-d9b2-4ad4-b177-9d007d3436da", 00:17:52.133 "is_configured": true, 00:17:52.133 "data_offset": 0, 00:17:52.133 "data_size": 65536 00:17:52.133 } 00:17:52.133 ] 00:17:52.133 }' 00:17:52.133 14:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.133 14:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.701 14:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:52.701 
14:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.701 14:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.701 14:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.701 14:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.701 14:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:52.701 14:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.701 14:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:52.701 14:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.701 14:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.701 14:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.701 14:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 15134fd8-76d8-4be2-9d47-4aff531a7c44 00:17:52.701 14:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.701 14:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.701 [2024-11-04 14:43:51.713708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:52.701 [2024-11-04 14:43:51.713793] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:52.701 [2024-11-04 14:43:51.713809] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:52.701 [2024-11-04 14:43:51.714163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:17:52.701 [2024-11-04 14:43:51.719102] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:52.701 [2024-11-04 14:43:51.719132] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:52.701 [2024-11-04 14:43:51.719460] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.701 NewBaseBdev 00:17:52.701 14:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.701 14:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:52.701 14:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:17:52.701 14:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:52.701 14:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:52.701 14:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:52.701 14:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:52.701 14:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:52.701 14:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.702 14:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.702 14:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.702 14:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:52.702 14:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.702 14:43:51 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.702 [ 00:17:52.702 { 00:17:52.702 "name": "NewBaseBdev", 00:17:52.702 "aliases": [ 00:17:52.702 "15134fd8-76d8-4be2-9d47-4aff531a7c44" 00:17:52.702 ], 00:17:52.702 "product_name": "Malloc disk", 00:17:52.702 "block_size": 512, 00:17:52.702 "num_blocks": 65536, 00:17:52.702 "uuid": "15134fd8-76d8-4be2-9d47-4aff531a7c44", 00:17:52.702 "assigned_rate_limits": { 00:17:52.702 "rw_ios_per_sec": 0, 00:17:52.702 "rw_mbytes_per_sec": 0, 00:17:52.702 "r_mbytes_per_sec": 0, 00:17:52.702 "w_mbytes_per_sec": 0 00:17:52.702 }, 00:17:52.702 "claimed": true, 00:17:52.702 "claim_type": "exclusive_write", 00:17:52.702 "zoned": false, 00:17:52.702 "supported_io_types": { 00:17:52.702 "read": true, 00:17:52.702 "write": true, 00:17:52.702 "unmap": true, 00:17:52.702 "flush": true, 00:17:52.702 "reset": true, 00:17:52.702 "nvme_admin": false, 00:17:52.702 "nvme_io": false, 00:17:52.702 "nvme_io_md": false, 00:17:52.702 "write_zeroes": true, 00:17:52.702 "zcopy": true, 00:17:52.702 "get_zone_info": false, 00:17:52.702 "zone_management": false, 00:17:52.702 "zone_append": false, 00:17:52.702 "compare": false, 00:17:52.702 "compare_and_write": false, 00:17:52.702 "abort": true, 00:17:52.702 "seek_hole": false, 00:17:52.702 "seek_data": false, 00:17:52.702 "copy": true, 00:17:52.702 "nvme_iov_md": false 00:17:52.702 }, 00:17:52.702 "memory_domains": [ 00:17:52.702 { 00:17:52.702 "dma_device_id": "system", 00:17:52.702 "dma_device_type": 1 00:17:52.702 }, 00:17:52.702 { 00:17:52.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.702 "dma_device_type": 2 00:17:52.702 } 00:17:52.702 ], 00:17:52.702 "driver_specific": {} 00:17:52.702 } 00:17:52.702 ] 00:17:52.702 14:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.702 14:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:52.702 14:43:51 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:52.702 14:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:52.702 14:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.702 14:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:52.702 14:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:52.702 14:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:52.702 14:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.702 14:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.702 14:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.702 14:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.702 14:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.702 14:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.702 14:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.702 14:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.702 14:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.702 14:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.702 "name": "Existed_Raid", 00:17:52.702 "uuid": "903e911e-7e23-4ec9-8c6d-d45e812e7f28", 00:17:52.702 "strip_size_kb": 64, 00:17:52.702 "state": "online", 
00:17:52.702 "raid_level": "raid5f", 00:17:52.702 "superblock": false, 00:17:52.702 "num_base_bdevs": 3, 00:17:52.702 "num_base_bdevs_discovered": 3, 00:17:52.702 "num_base_bdevs_operational": 3, 00:17:52.702 "base_bdevs_list": [ 00:17:52.702 { 00:17:52.702 "name": "NewBaseBdev", 00:17:52.702 "uuid": "15134fd8-76d8-4be2-9d47-4aff531a7c44", 00:17:52.702 "is_configured": true, 00:17:52.702 "data_offset": 0, 00:17:52.702 "data_size": 65536 00:17:52.702 }, 00:17:52.702 { 00:17:52.702 "name": "BaseBdev2", 00:17:52.702 "uuid": "4fefce18-11d1-4d31-a097-01ab7c8ec12e", 00:17:52.702 "is_configured": true, 00:17:52.702 "data_offset": 0, 00:17:52.702 "data_size": 65536 00:17:52.702 }, 00:17:52.702 { 00:17:52.702 "name": "BaseBdev3", 00:17:52.702 "uuid": "9d38381e-d9b2-4ad4-b177-9d007d3436da", 00:17:52.702 "is_configured": true, 00:17:52.702 "data_offset": 0, 00:17:52.702 "data_size": 65536 00:17:52.702 } 00:17:52.702 ] 00:17:52.702 }' 00:17:52.702 14:43:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.702 14:43:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.270 14:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:53.270 14:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:53.270 14:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:53.270 14:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:53.270 14:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:53.270 14:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:53.270 14:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:53.270 14:43:52 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:53.270 14:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.270 14:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.270 [2024-11-04 14:43:52.265452] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:53.270 14:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.270 14:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:53.270 "name": "Existed_Raid", 00:17:53.270 "aliases": [ 00:17:53.270 "903e911e-7e23-4ec9-8c6d-d45e812e7f28" 00:17:53.270 ], 00:17:53.270 "product_name": "Raid Volume", 00:17:53.270 "block_size": 512, 00:17:53.270 "num_blocks": 131072, 00:17:53.270 "uuid": "903e911e-7e23-4ec9-8c6d-d45e812e7f28", 00:17:53.270 "assigned_rate_limits": { 00:17:53.270 "rw_ios_per_sec": 0, 00:17:53.270 "rw_mbytes_per_sec": 0, 00:17:53.270 "r_mbytes_per_sec": 0, 00:17:53.270 "w_mbytes_per_sec": 0 00:17:53.270 }, 00:17:53.270 "claimed": false, 00:17:53.270 "zoned": false, 00:17:53.270 "supported_io_types": { 00:17:53.270 "read": true, 00:17:53.270 "write": true, 00:17:53.270 "unmap": false, 00:17:53.270 "flush": false, 00:17:53.270 "reset": true, 00:17:53.270 "nvme_admin": false, 00:17:53.270 "nvme_io": false, 00:17:53.270 "nvme_io_md": false, 00:17:53.270 "write_zeroes": true, 00:17:53.270 "zcopy": false, 00:17:53.270 "get_zone_info": false, 00:17:53.270 "zone_management": false, 00:17:53.270 "zone_append": false, 00:17:53.270 "compare": false, 00:17:53.270 "compare_and_write": false, 00:17:53.270 "abort": false, 00:17:53.270 "seek_hole": false, 00:17:53.270 "seek_data": false, 00:17:53.270 "copy": false, 00:17:53.270 "nvme_iov_md": false 00:17:53.270 }, 00:17:53.270 "driver_specific": { 00:17:53.270 "raid": { 00:17:53.270 "uuid": "903e911e-7e23-4ec9-8c6d-d45e812e7f28", 
00:17:53.270 "strip_size_kb": 64, 00:17:53.270 "state": "online", 00:17:53.270 "raid_level": "raid5f", 00:17:53.270 "superblock": false, 00:17:53.270 "num_base_bdevs": 3, 00:17:53.270 "num_base_bdevs_discovered": 3, 00:17:53.270 "num_base_bdevs_operational": 3, 00:17:53.270 "base_bdevs_list": [ 00:17:53.270 { 00:17:53.270 "name": "NewBaseBdev", 00:17:53.270 "uuid": "15134fd8-76d8-4be2-9d47-4aff531a7c44", 00:17:53.270 "is_configured": true, 00:17:53.270 "data_offset": 0, 00:17:53.270 "data_size": 65536 00:17:53.270 }, 00:17:53.270 { 00:17:53.270 "name": "BaseBdev2", 00:17:53.270 "uuid": "4fefce18-11d1-4d31-a097-01ab7c8ec12e", 00:17:53.270 "is_configured": true, 00:17:53.270 "data_offset": 0, 00:17:53.270 "data_size": 65536 00:17:53.270 }, 00:17:53.270 { 00:17:53.270 "name": "BaseBdev3", 00:17:53.270 "uuid": "9d38381e-d9b2-4ad4-b177-9d007d3436da", 00:17:53.270 "is_configured": true, 00:17:53.270 "data_offset": 0, 00:17:53.270 "data_size": 65536 00:17:53.270 } 00:17:53.270 ] 00:17:53.270 } 00:17:53.270 } 00:17:53.270 }' 00:17:53.270 14:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:53.270 14:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:53.270 BaseBdev2 00:17:53.270 BaseBdev3' 00:17:53.271 14:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:53.529 14:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:53.529 14:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:53.529 14:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:53.529 14:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:53.529 14:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.529 14:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:53.529 14:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.529 14:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:53.529 14:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:53.529 14:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:53.529 14:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:53.529 14:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.529 14:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.529 14:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:53.529 14:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.529 14:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:53.529 14:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:53.529 14:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:53.529 14:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:53.529 14:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:53.529 14:43:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.529 14:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.529 14:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.529 14:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:53.529 14:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:53.529 14:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:53.529 14:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.529 14:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.529 [2024-11-04 14:43:52.581305] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:53.529 [2024-11-04 14:43:52.581343] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:53.529 [2024-11-04 14:43:52.581457] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:53.530 [2024-11-04 14:43:52.581810] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:53.530 [2024-11-04 14:43:52.581845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:53.530 14:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.530 14:43:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80226 00:17:53.530 14:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 80226 ']' 00:17:53.530 14:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # kill -0 80226 
00:17:53.530 14:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:17:53.530 14:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:53.530 14:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80226 00:17:53.530 14:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:53.530 14:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:53.530 killing process with pid 80226 00:17:53.530 14:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80226' 00:17:53.530 14:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 80226 00:17:53.530 [2024-11-04 14:43:52.618631] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:53.530 14:43:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 80226 00:17:53.788 [2024-11-04 14:43:52.895469] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:55.162 14:43:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:55.162 00:17:55.162 real 0m12.122s 00:17:55.162 user 0m20.171s 00:17:55.162 sys 0m1.708s 00:17:55.162 14:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:55.162 14:43:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.162 ************************************ 00:17:55.162 END TEST raid5f_state_function_test 00:17:55.162 ************************************ 00:17:55.162 14:43:53 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:17:55.162 14:43:53 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:55.162 
14:43:53 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:55.162 14:43:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:55.162 ************************************ 00:17:55.162 START TEST raid5f_state_function_test_sb 00:17:55.162 ************************************ 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 true 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:55.162 
14:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80859 00:17:55.162 Process raid pid: 80859 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80859' 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80859 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:55.162 14:43:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 80859 ']' 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:55.162 14:43:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.162 [2024-11-04 14:43:54.124602] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:17:55.162 [2024-11-04 14:43:54.124794] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:55.420 [2024-11-04 14:43:54.317846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.420 [2024-11-04 14:43:54.484261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.678 [2024-11-04 14:43:54.701227] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:55.678 [2024-11-04 14:43:54.701274] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:56.246 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:56.246 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:17:56.246 14:43:55 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:56.246 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.246 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.246 [2024-11-04 14:43:55.166259] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:56.246 [2024-11-04 14:43:55.166318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:56.246 [2024-11-04 14:43:55.166334] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:56.246 [2024-11-04 14:43:55.166364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:56.246 [2024-11-04 14:43:55.166389] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:56.246 [2024-11-04 14:43:55.166417] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:56.246 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.246 14:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:56.246 14:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:56.246 14:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:56.246 14:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:56.246 14:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:56.246 14:43:55 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:56.246 14:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.246 14:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.246 14:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.246 14:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.246 14:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.246 14:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.246 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.246 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.246 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.246 14:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.246 "name": "Existed_Raid", 00:17:56.246 "uuid": "8dab12cd-fcf2-4bab-93cc-6b9fa75f016b", 00:17:56.246 "strip_size_kb": 64, 00:17:56.246 "state": "configuring", 00:17:56.246 "raid_level": "raid5f", 00:17:56.246 "superblock": true, 00:17:56.246 "num_base_bdevs": 3, 00:17:56.246 "num_base_bdevs_discovered": 0, 00:17:56.246 "num_base_bdevs_operational": 3, 00:17:56.246 "base_bdevs_list": [ 00:17:56.246 { 00:17:56.246 "name": "BaseBdev1", 00:17:56.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.246 "is_configured": false, 00:17:56.246 "data_offset": 0, 00:17:56.246 "data_size": 0 00:17:56.246 }, 00:17:56.246 { 00:17:56.246 "name": "BaseBdev2", 00:17:56.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.246 "is_configured": false, 00:17:56.246 
"data_offset": 0, 00:17:56.246 "data_size": 0 00:17:56.246 }, 00:17:56.246 { 00:17:56.246 "name": "BaseBdev3", 00:17:56.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.246 "is_configured": false, 00:17:56.246 "data_offset": 0, 00:17:56.246 "data_size": 0 00:17:56.246 } 00:17:56.246 ] 00:17:56.246 }' 00:17:56.246 14:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.246 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.813 14:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:56.813 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.813 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.813 [2024-11-04 14:43:55.674445] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:56.813 [2024-11-04 14:43:55.674525] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:56.813 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.813 14:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:56.813 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.813 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.813 [2024-11-04 14:43:55.682333] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:56.813 [2024-11-04 14:43:55.682412] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:56.813 [2024-11-04 14:43:55.682426] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:56.813 [2024-11-04 14:43:55.682442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:56.813 [2024-11-04 14:43:55.682451] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:56.813 [2024-11-04 14:43:55.682464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:56.813 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.813 14:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.814 [2024-11-04 14:43:55.729082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:56.814 BaseBdev1 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.814 [ 00:17:56.814 { 00:17:56.814 "name": "BaseBdev1", 00:17:56.814 "aliases": [ 00:17:56.814 "a7cf2e90-cee4-4049-992b-d93f6d73d33a" 00:17:56.814 ], 00:17:56.814 "product_name": "Malloc disk", 00:17:56.814 "block_size": 512, 00:17:56.814 "num_blocks": 65536, 00:17:56.814 "uuid": "a7cf2e90-cee4-4049-992b-d93f6d73d33a", 00:17:56.814 "assigned_rate_limits": { 00:17:56.814 "rw_ios_per_sec": 0, 00:17:56.814 "rw_mbytes_per_sec": 0, 00:17:56.814 "r_mbytes_per_sec": 0, 00:17:56.814 "w_mbytes_per_sec": 0 00:17:56.814 }, 00:17:56.814 "claimed": true, 00:17:56.814 "claim_type": "exclusive_write", 00:17:56.814 "zoned": false, 00:17:56.814 "supported_io_types": { 00:17:56.814 "read": true, 00:17:56.814 "write": true, 00:17:56.814 "unmap": true, 00:17:56.814 "flush": true, 00:17:56.814 "reset": true, 00:17:56.814 "nvme_admin": false, 00:17:56.814 "nvme_io": false, 00:17:56.814 "nvme_io_md": false, 00:17:56.814 "write_zeroes": true, 00:17:56.814 "zcopy": true, 00:17:56.814 "get_zone_info": false, 00:17:56.814 "zone_management": false, 00:17:56.814 "zone_append": false, 00:17:56.814 "compare": false, 00:17:56.814 "compare_and_write": false, 00:17:56.814 "abort": true, 00:17:56.814 "seek_hole": false, 00:17:56.814 
"seek_data": false, 00:17:56.814 "copy": true, 00:17:56.814 "nvme_iov_md": false 00:17:56.814 }, 00:17:56.814 "memory_domains": [ 00:17:56.814 { 00:17:56.814 "dma_device_id": "system", 00:17:56.814 "dma_device_type": 1 00:17:56.814 }, 00:17:56.814 { 00:17:56.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.814 "dma_device_type": 2 00:17:56.814 } 00:17:56.814 ], 00:17:56.814 "driver_specific": {} 00:17:56.814 } 00:17:56.814 ] 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.814 "name": "Existed_Raid", 00:17:56.814 "uuid": "67546faf-b990-4c34-acb3-9003789a7190", 00:17:56.814 "strip_size_kb": 64, 00:17:56.814 "state": "configuring", 00:17:56.814 "raid_level": "raid5f", 00:17:56.814 "superblock": true, 00:17:56.814 "num_base_bdevs": 3, 00:17:56.814 "num_base_bdevs_discovered": 1, 00:17:56.814 "num_base_bdevs_operational": 3, 00:17:56.814 "base_bdevs_list": [ 00:17:56.814 { 00:17:56.814 "name": "BaseBdev1", 00:17:56.814 "uuid": "a7cf2e90-cee4-4049-992b-d93f6d73d33a", 00:17:56.814 "is_configured": true, 00:17:56.814 "data_offset": 2048, 00:17:56.814 "data_size": 63488 00:17:56.814 }, 00:17:56.814 { 00:17:56.814 "name": "BaseBdev2", 00:17:56.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.814 "is_configured": false, 00:17:56.814 "data_offset": 0, 00:17:56.814 "data_size": 0 00:17:56.814 }, 00:17:56.814 { 00:17:56.814 "name": "BaseBdev3", 00:17:56.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.814 "is_configured": false, 00:17:56.814 "data_offset": 0, 00:17:56.814 "data_size": 0 00:17:56.814 } 00:17:56.814 ] 00:17:56.814 }' 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.814 14:43:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.382 14:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:17:57.382 14:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.382 14:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.382 [2024-11-04 14:43:56.293344] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:57.382 [2024-11-04 14:43:56.293423] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:57.382 14:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.382 14:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:57.382 14:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.382 14:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.382 [2024-11-04 14:43:56.301378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:57.382 [2024-11-04 14:43:56.303812] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:57.382 [2024-11-04 14:43:56.303878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:57.382 [2024-11-04 14:43:56.303894] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:57.382 [2024-11-04 14:43:56.303910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:57.382 14:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.382 14:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:57.382 14:43:56 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:57.382 14:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:57.382 14:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:57.382 14:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:57.382 14:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:57.382 14:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:57.382 14:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:57.382 14:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.382 14:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.382 14:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.382 14:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.382 14:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.382 14:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.382 14:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.382 14:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.382 14:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.382 14:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.382 "name": 
"Existed_Raid", 00:17:57.382 "uuid": "cb16e94c-b05b-4186-af16-c26d8f8e1fc5", 00:17:57.382 "strip_size_kb": 64, 00:17:57.382 "state": "configuring", 00:17:57.382 "raid_level": "raid5f", 00:17:57.382 "superblock": true, 00:17:57.382 "num_base_bdevs": 3, 00:17:57.382 "num_base_bdevs_discovered": 1, 00:17:57.382 "num_base_bdevs_operational": 3, 00:17:57.382 "base_bdevs_list": [ 00:17:57.382 { 00:17:57.382 "name": "BaseBdev1", 00:17:57.382 "uuid": "a7cf2e90-cee4-4049-992b-d93f6d73d33a", 00:17:57.382 "is_configured": true, 00:17:57.382 "data_offset": 2048, 00:17:57.382 "data_size": 63488 00:17:57.382 }, 00:17:57.382 { 00:17:57.382 "name": "BaseBdev2", 00:17:57.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.382 "is_configured": false, 00:17:57.382 "data_offset": 0, 00:17:57.382 "data_size": 0 00:17:57.382 }, 00:17:57.382 { 00:17:57.382 "name": "BaseBdev3", 00:17:57.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.382 "is_configured": false, 00:17:57.382 "data_offset": 0, 00:17:57.382 "data_size": 0 00:17:57.382 } 00:17:57.382 ] 00:17:57.382 }' 00:17:57.382 14:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.382 14:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.986 [2024-11-04 14:43:56.903618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:57.986 BaseBdev2 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.986 [ 00:17:57.986 { 00:17:57.986 "name": "BaseBdev2", 00:17:57.986 "aliases": [ 00:17:57.986 "6d9fdf0f-3102-4fd6-bb90-afab9951789d" 00:17:57.986 ], 00:17:57.986 "product_name": "Malloc disk", 00:17:57.986 "block_size": 512, 00:17:57.986 "num_blocks": 65536, 00:17:57.986 "uuid": "6d9fdf0f-3102-4fd6-bb90-afab9951789d", 00:17:57.986 "assigned_rate_limits": { 00:17:57.986 "rw_ios_per_sec": 0, 00:17:57.986 "rw_mbytes_per_sec": 0, 00:17:57.986 "r_mbytes_per_sec": 0, 00:17:57.986 "w_mbytes_per_sec": 0 00:17:57.986 }, 00:17:57.986 "claimed": true, 
00:17:57.986 "claim_type": "exclusive_write", 00:17:57.986 "zoned": false, 00:17:57.986 "supported_io_types": { 00:17:57.986 "read": true, 00:17:57.986 "write": true, 00:17:57.986 "unmap": true, 00:17:57.986 "flush": true, 00:17:57.986 "reset": true, 00:17:57.986 "nvme_admin": false, 00:17:57.986 "nvme_io": false, 00:17:57.986 "nvme_io_md": false, 00:17:57.986 "write_zeroes": true, 00:17:57.986 "zcopy": true, 00:17:57.986 "get_zone_info": false, 00:17:57.986 "zone_management": false, 00:17:57.986 "zone_append": false, 00:17:57.986 "compare": false, 00:17:57.986 "compare_and_write": false, 00:17:57.986 "abort": true, 00:17:57.986 "seek_hole": false, 00:17:57.986 "seek_data": false, 00:17:57.986 "copy": true, 00:17:57.986 "nvme_iov_md": false 00:17:57.986 }, 00:17:57.986 "memory_domains": [ 00:17:57.986 { 00:17:57.986 "dma_device_id": "system", 00:17:57.986 "dma_device_type": 1 00:17:57.986 }, 00:17:57.986 { 00:17:57.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.986 "dma_device_type": 2 00:17:57.986 } 00:17:57.986 ], 00:17:57.986 "driver_specific": {} 00:17:57.986 } 00:17:57.986 ] 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:57.986 14:43:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.986 14:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.986 "name": "Existed_Raid", 00:17:57.986 "uuid": "cb16e94c-b05b-4186-af16-c26d8f8e1fc5", 00:17:57.986 "strip_size_kb": 64, 00:17:57.986 "state": "configuring", 00:17:57.986 "raid_level": "raid5f", 00:17:57.986 "superblock": true, 00:17:57.986 "num_base_bdevs": 3, 00:17:57.986 "num_base_bdevs_discovered": 2, 00:17:57.986 "num_base_bdevs_operational": 3, 00:17:57.987 "base_bdevs_list": [ 00:17:57.987 { 00:17:57.987 "name": "BaseBdev1", 00:17:57.987 "uuid": "a7cf2e90-cee4-4049-992b-d93f6d73d33a", 
00:17:57.987 "is_configured": true, 00:17:57.987 "data_offset": 2048, 00:17:57.987 "data_size": 63488 00:17:57.987 }, 00:17:57.987 { 00:17:57.987 "name": "BaseBdev2", 00:17:57.987 "uuid": "6d9fdf0f-3102-4fd6-bb90-afab9951789d", 00:17:57.987 "is_configured": true, 00:17:57.987 "data_offset": 2048, 00:17:57.987 "data_size": 63488 00:17:57.987 }, 00:17:57.987 { 00:17:57.987 "name": "BaseBdev3", 00:17:57.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.987 "is_configured": false, 00:17:57.987 "data_offset": 0, 00:17:57.987 "data_size": 0 00:17:57.987 } 00:17:57.987 ] 00:17:57.987 }' 00:17:57.987 14:43:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.987 14:43:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.553 [2024-11-04 14:43:57.509309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:58.553 [2024-11-04 14:43:57.509664] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:58.553 [2024-11-04 14:43:57.509698] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:58.553 [2024-11-04 14:43:57.510058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:58.553 BaseBdev3 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.553 [2024-11-04 14:43:57.515464] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:58.553 [2024-11-04 14:43:57.515493] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:58.553 [2024-11-04 14:43:57.515867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.553 [ 00:17:58.553 { 00:17:58.553 "name": "BaseBdev3", 00:17:58.553 "aliases": [ 00:17:58.553 "7c5c43ed-2e91-4db1-aaba-f6a1b4ff8e67" 00:17:58.553 ], 00:17:58.553 "product_name": "Malloc disk", 00:17:58.553 "block_size": 512, 00:17:58.553 
"num_blocks": 65536, 00:17:58.553 "uuid": "7c5c43ed-2e91-4db1-aaba-f6a1b4ff8e67", 00:17:58.553 "assigned_rate_limits": { 00:17:58.553 "rw_ios_per_sec": 0, 00:17:58.553 "rw_mbytes_per_sec": 0, 00:17:58.553 "r_mbytes_per_sec": 0, 00:17:58.553 "w_mbytes_per_sec": 0 00:17:58.553 }, 00:17:58.553 "claimed": true, 00:17:58.553 "claim_type": "exclusive_write", 00:17:58.553 "zoned": false, 00:17:58.553 "supported_io_types": { 00:17:58.553 "read": true, 00:17:58.553 "write": true, 00:17:58.553 "unmap": true, 00:17:58.553 "flush": true, 00:17:58.553 "reset": true, 00:17:58.553 "nvme_admin": false, 00:17:58.553 "nvme_io": false, 00:17:58.553 "nvme_io_md": false, 00:17:58.553 "write_zeroes": true, 00:17:58.553 "zcopy": true, 00:17:58.553 "get_zone_info": false, 00:17:58.553 "zone_management": false, 00:17:58.553 "zone_append": false, 00:17:58.553 "compare": false, 00:17:58.553 "compare_and_write": false, 00:17:58.553 "abort": true, 00:17:58.553 "seek_hole": false, 00:17:58.553 "seek_data": false, 00:17:58.553 "copy": true, 00:17:58.553 "nvme_iov_md": false 00:17:58.553 }, 00:17:58.553 "memory_domains": [ 00:17:58.553 { 00:17:58.553 "dma_device_id": "system", 00:17:58.553 "dma_device_type": 1 00:17:58.553 }, 00:17:58.553 { 00:17:58.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.553 "dma_device_type": 2 00:17:58.553 } 00:17:58.553 ], 00:17:58.553 "driver_specific": {} 00:17:58.553 } 00:17:58.553 ] 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 
3 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.553 "name": "Existed_Raid", 00:17:58.553 "uuid": "cb16e94c-b05b-4186-af16-c26d8f8e1fc5", 00:17:58.553 "strip_size_kb": 64, 00:17:58.553 "state": "online", 00:17:58.553 "raid_level": "raid5f", 00:17:58.553 "superblock": true, 
00:17:58.553 "num_base_bdevs": 3, 00:17:58.553 "num_base_bdevs_discovered": 3, 00:17:58.553 "num_base_bdevs_operational": 3, 00:17:58.553 "base_bdevs_list": [ 00:17:58.553 { 00:17:58.553 "name": "BaseBdev1", 00:17:58.553 "uuid": "a7cf2e90-cee4-4049-992b-d93f6d73d33a", 00:17:58.553 "is_configured": true, 00:17:58.553 "data_offset": 2048, 00:17:58.553 "data_size": 63488 00:17:58.553 }, 00:17:58.553 { 00:17:58.553 "name": "BaseBdev2", 00:17:58.553 "uuid": "6d9fdf0f-3102-4fd6-bb90-afab9951789d", 00:17:58.553 "is_configured": true, 00:17:58.553 "data_offset": 2048, 00:17:58.553 "data_size": 63488 00:17:58.553 }, 00:17:58.553 { 00:17:58.553 "name": "BaseBdev3", 00:17:58.553 "uuid": "7c5c43ed-2e91-4db1-aaba-f6a1b4ff8e67", 00:17:58.553 "is_configured": true, 00:17:58.553 "data_offset": 2048, 00:17:58.553 "data_size": 63488 00:17:58.553 } 00:17:58.553 ] 00:17:58.553 }' 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.553 14:43:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.121 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:59.121 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:59.121 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:59.121 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:59.121 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:59.121 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:59.121 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:59.121 14:43:58 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.121 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:59.121 14:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.121 [2024-11-04 14:43:58.062214] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:59.121 14:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.121 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:59.121 "name": "Existed_Raid", 00:17:59.121 "aliases": [ 00:17:59.121 "cb16e94c-b05b-4186-af16-c26d8f8e1fc5" 00:17:59.121 ], 00:17:59.121 "product_name": "Raid Volume", 00:17:59.121 "block_size": 512, 00:17:59.121 "num_blocks": 126976, 00:17:59.121 "uuid": "cb16e94c-b05b-4186-af16-c26d8f8e1fc5", 00:17:59.121 "assigned_rate_limits": { 00:17:59.121 "rw_ios_per_sec": 0, 00:17:59.121 "rw_mbytes_per_sec": 0, 00:17:59.121 "r_mbytes_per_sec": 0, 00:17:59.121 "w_mbytes_per_sec": 0 00:17:59.121 }, 00:17:59.121 "claimed": false, 00:17:59.121 "zoned": false, 00:17:59.121 "supported_io_types": { 00:17:59.121 "read": true, 00:17:59.121 "write": true, 00:17:59.121 "unmap": false, 00:17:59.121 "flush": false, 00:17:59.121 "reset": true, 00:17:59.121 "nvme_admin": false, 00:17:59.121 "nvme_io": false, 00:17:59.121 "nvme_io_md": false, 00:17:59.121 "write_zeroes": true, 00:17:59.121 "zcopy": false, 00:17:59.121 "get_zone_info": false, 00:17:59.121 "zone_management": false, 00:17:59.121 "zone_append": false, 00:17:59.121 "compare": false, 00:17:59.121 "compare_and_write": false, 00:17:59.121 "abort": false, 00:17:59.121 "seek_hole": false, 00:17:59.121 "seek_data": false, 00:17:59.121 "copy": false, 00:17:59.121 "nvme_iov_md": false 00:17:59.121 }, 00:17:59.121 "driver_specific": { 00:17:59.121 "raid": { 00:17:59.121 "uuid": "cb16e94c-b05b-4186-af16-c26d8f8e1fc5", 00:17:59.121 
"strip_size_kb": 64, 00:17:59.121 "state": "online", 00:17:59.121 "raid_level": "raid5f", 00:17:59.121 "superblock": true, 00:17:59.121 "num_base_bdevs": 3, 00:17:59.121 "num_base_bdevs_discovered": 3, 00:17:59.121 "num_base_bdevs_operational": 3, 00:17:59.121 "base_bdevs_list": [ 00:17:59.121 { 00:17:59.121 "name": "BaseBdev1", 00:17:59.121 "uuid": "a7cf2e90-cee4-4049-992b-d93f6d73d33a", 00:17:59.121 "is_configured": true, 00:17:59.121 "data_offset": 2048, 00:17:59.121 "data_size": 63488 00:17:59.121 }, 00:17:59.121 { 00:17:59.121 "name": "BaseBdev2", 00:17:59.121 "uuid": "6d9fdf0f-3102-4fd6-bb90-afab9951789d", 00:17:59.121 "is_configured": true, 00:17:59.121 "data_offset": 2048, 00:17:59.121 "data_size": 63488 00:17:59.121 }, 00:17:59.121 { 00:17:59.121 "name": "BaseBdev3", 00:17:59.121 "uuid": "7c5c43ed-2e91-4db1-aaba-f6a1b4ff8e67", 00:17:59.121 "is_configured": true, 00:17:59.121 "data_offset": 2048, 00:17:59.121 "data_size": 63488 00:17:59.121 } 00:17:59.121 ] 00:17:59.121 } 00:17:59.121 } 00:17:59.121 }' 00:17:59.121 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:59.121 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:59.121 BaseBdev2 00:17:59.121 BaseBdev3' 00:17:59.121 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.121 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:59.121 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.121 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:59.121 14:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:59.121 14:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.121 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.379 14:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.379 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:59.379 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:59.379 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.379 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:59.379 14:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.379 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.379 14:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.380 14:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.380 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:59.380 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:59.380 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.380 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:59.380 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:17:59.380 14:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.380 14:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.380 14:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.380 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:59.380 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:59.380 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:59.380 14:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.380 14:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.380 [2024-11-04 14:43:58.398101] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:59.380 14:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.380 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:59.380 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:59.380 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:59.380 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:17:59.380 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:59.380 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:17:59.380 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
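The trace above loops over BaseBdev1..3, joining each bdev's `block_size md_size md_interleave dif_type` into a string and pattern-matching it against the raid bdev's (`[[ 512 == \5\1\2\ \ \ ]]`, i.e. `512` plus empty md fields). A minimal standalone sketch of that comparison, with illustrative values mirroring the log (the real script fills `cmp_base_bdev` from `rpc_cmd bdev_get_bdevs` piped through jq):

```shell
#!/usr/bin/env bash
# Sketch of the geometry check traced above: every base bdev's
# "block_size md_size md_interleave dif_type" string must match the
# raid bdev's. '512   ' mimics jq's join(" ") of 512 and three nulls.
cmp_raid_bdev='512   '
base_bdev_names='BaseBdev1 BaseBdev2 BaseBdev3'
for name in $base_bdev_names; do
  # In the real test this comes from:
  #   rpc_cmd bdev_get_bdevs -b "$name" | jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
  cmp_base_bdev='512   '
  [[ $cmp_base_bdev == "$cmp_raid_bdev" ]] || { echo "geometry mismatch for $name"; exit 1; }
done
echo "geometry OK"
```

If any base bdev disagreed with the raid bdev's geometry, the loop would exit non-zero and the test would fail at this point instead of proceeding to delete BaseBdev1.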
00:17:59.380 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.380 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:59.380 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:59.380 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:59.380 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.380 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.380 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.380 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.380 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.380 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:59.380 14:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.380 14:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.638 14:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.638 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.638 "name": "Existed_Raid", 00:17:59.638 "uuid": "cb16e94c-b05b-4186-af16-c26d8f8e1fc5", 00:17:59.638 "strip_size_kb": 64, 00:17:59.638 "state": "online", 00:17:59.638 "raid_level": "raid5f", 00:17:59.638 "superblock": true, 00:17:59.638 "num_base_bdevs": 3, 00:17:59.638 "num_base_bdevs_discovered": 2, 00:17:59.638 "num_base_bdevs_operational": 2, 
00:17:59.638 "base_bdevs_list": [ 00:17:59.638 { 00:17:59.638 "name": null, 00:17:59.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.638 "is_configured": false, 00:17:59.638 "data_offset": 0, 00:17:59.638 "data_size": 63488 00:17:59.638 }, 00:17:59.638 { 00:17:59.638 "name": "BaseBdev2", 00:17:59.638 "uuid": "6d9fdf0f-3102-4fd6-bb90-afab9951789d", 00:17:59.638 "is_configured": true, 00:17:59.638 "data_offset": 2048, 00:17:59.638 "data_size": 63488 00:17:59.638 }, 00:17:59.638 { 00:17:59.638 "name": "BaseBdev3", 00:17:59.638 "uuid": "7c5c43ed-2e91-4db1-aaba-f6a1b4ff8e67", 00:17:59.638 "is_configured": true, 00:17:59.638 "data_offset": 2048, 00:17:59.638 "data_size": 63488 00:17:59.638 } 00:17:59.638 ] 00:17:59.638 }' 00:17:59.638 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.638 14:43:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.906 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:59.906 14:43:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:59.906 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.906 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.906 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.906 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:59.906 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.165 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:00.165 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
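After BaseBdev1 is deleted, the trace shows `verify_raid_bdev_state Existed_Raid online raid5f 64 2`: because raid5f has redundancy, losing one of three base bdevs leaves the array `online` with 2 operational members. An illustrative standalone sketch of that check (not the actual `bdev_raid.sh` helper, which uses jq on the full RPC output; here the JSON is inlined from the log and fields are pulled out with sed):

```shell
#!/usr/bin/env bash
# Expected state after removing one base bdev from a 3-disk raid5f array,
# mirroring the raid_bdev_info JSON in the log above.
raid_bdev_info='{
  "name": "Existed_Raid",
  "state": "online",
  "raid_level": "raid5f",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2
}'
# Crude field extractor standing in for jq -r '.field'.
get_field() { sed -n "s/.*\"$1\": *\"\?\([^\",]*\)\"\?,\?.*/\1/p" <<<"$raid_bdev_info"; }
state=$(get_field state)
operational=$(get_field num_base_bdevs_operational)
# raid5f tolerates the loss of one base bdev, so the array must stay online.
[[ $state == online ]] && (( operational >= 2 )) && echo "state verified"
```

A non-redundant level (e.g. raid0) would instead have `expected_state=offline` here, which is exactly what the `has_redundancy raid5f` / `expected_state=online` branch in the trace decides.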
00:18:00.165 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:00.165 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.165 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.165 [2024-11-04 14:43:59.056485] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:00.165 [2024-11-04 14:43:59.056684] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:00.165 [2024-11-04 14:43:59.141259] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:00.165 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.165 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:00.165 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:00.165 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:00.165 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.165 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.165 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.165 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.165 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:00.165 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:00.165 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:00.165 
14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.165 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.165 [2024-11-04 14:43:59.197303] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:00.165 [2024-11-04 14:43:59.197485] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:00.165 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.165 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:00.165 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:00.424 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:00.424 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.424 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.424 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.424 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.424 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:00.424 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:00.424 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:18:00.424 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:00.424 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:00.424 14:43:59 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:00.424 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.424 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.424 BaseBdev2 00:18:00.424 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.424 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:00.424 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:18:00.424 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:00.424 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:00.424 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:00.424 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:00.424 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:00.424 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.424 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.424 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.424 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:00.424 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.424 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.424 [ 00:18:00.424 { 
00:18:00.424 "name": "BaseBdev2", 00:18:00.424 "aliases": [ 00:18:00.424 "f571ece0-3992-4a19-9bf2-52f8ba45ce2f" 00:18:00.424 ], 00:18:00.424 "product_name": "Malloc disk", 00:18:00.424 "block_size": 512, 00:18:00.424 "num_blocks": 65536, 00:18:00.424 "uuid": "f571ece0-3992-4a19-9bf2-52f8ba45ce2f", 00:18:00.424 "assigned_rate_limits": { 00:18:00.424 "rw_ios_per_sec": 0, 00:18:00.424 "rw_mbytes_per_sec": 0, 00:18:00.424 "r_mbytes_per_sec": 0, 00:18:00.424 "w_mbytes_per_sec": 0 00:18:00.424 }, 00:18:00.424 "claimed": false, 00:18:00.424 "zoned": false, 00:18:00.424 "supported_io_types": { 00:18:00.424 "read": true, 00:18:00.424 "write": true, 00:18:00.424 "unmap": true, 00:18:00.424 "flush": true, 00:18:00.425 "reset": true, 00:18:00.425 "nvme_admin": false, 00:18:00.425 "nvme_io": false, 00:18:00.425 "nvme_io_md": false, 00:18:00.425 "write_zeroes": true, 00:18:00.425 "zcopy": true, 00:18:00.425 "get_zone_info": false, 00:18:00.425 "zone_management": false, 00:18:00.425 "zone_append": false, 00:18:00.425 "compare": false, 00:18:00.425 "compare_and_write": false, 00:18:00.425 "abort": true, 00:18:00.425 "seek_hole": false, 00:18:00.425 "seek_data": false, 00:18:00.425 "copy": true, 00:18:00.425 "nvme_iov_md": false 00:18:00.425 }, 00:18:00.425 "memory_domains": [ 00:18:00.425 { 00:18:00.425 "dma_device_id": "system", 00:18:00.425 "dma_device_type": 1 00:18:00.425 }, 00:18:00.425 { 00:18:00.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.425 "dma_device_type": 2 00:18:00.425 } 00:18:00.425 ], 00:18:00.425 "driver_specific": {} 00:18:00.425 } 00:18:00.425 ] 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- 
# (( i < num_base_bdevs )) 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.425 BaseBdev3 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.425 14:43:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.425 [ 00:18:00.425 { 00:18:00.425 "name": "BaseBdev3", 00:18:00.425 "aliases": [ 00:18:00.425 "3f2b7d3a-022a-400a-8d86-525d7b83ed79" 00:18:00.425 ], 00:18:00.425 "product_name": "Malloc disk", 00:18:00.425 "block_size": 512, 00:18:00.425 "num_blocks": 65536, 00:18:00.425 "uuid": "3f2b7d3a-022a-400a-8d86-525d7b83ed79", 00:18:00.425 "assigned_rate_limits": { 00:18:00.425 "rw_ios_per_sec": 0, 00:18:00.425 "rw_mbytes_per_sec": 0, 00:18:00.425 "r_mbytes_per_sec": 0, 00:18:00.425 "w_mbytes_per_sec": 0 00:18:00.425 }, 00:18:00.425 "claimed": false, 00:18:00.425 "zoned": false, 00:18:00.425 "supported_io_types": { 00:18:00.425 "read": true, 00:18:00.425 "write": true, 00:18:00.425 "unmap": true, 00:18:00.425 "flush": true, 00:18:00.425 "reset": true, 00:18:00.425 "nvme_admin": false, 00:18:00.425 "nvme_io": false, 00:18:00.425 "nvme_io_md": false, 00:18:00.425 "write_zeroes": true, 00:18:00.425 "zcopy": true, 00:18:00.425 "get_zone_info": false, 00:18:00.425 "zone_management": false, 00:18:00.425 "zone_append": false, 00:18:00.425 "compare": false, 00:18:00.425 "compare_and_write": false, 00:18:00.425 "abort": true, 00:18:00.425 "seek_hole": false, 00:18:00.425 "seek_data": false, 00:18:00.425 "copy": true, 00:18:00.425 "nvme_iov_md": false 00:18:00.425 }, 00:18:00.425 "memory_domains": [ 00:18:00.425 { 00:18:00.425 "dma_device_id": "system", 00:18:00.425 "dma_device_type": 1 00:18:00.425 }, 00:18:00.425 { 00:18:00.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.425 "dma_device_type": 2 00:18:00.425 } 00:18:00.425 ], 00:18:00.425 "driver_specific": {} 00:18:00.425 } 00:18:00.425 ] 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.425 [2024-11-04 14:43:59.492371] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:00.425 [2024-11-04 14:43:59.492444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:00.425 [2024-11-04 14:43:59.492493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:00.425 [2024-11-04 14:43:59.495033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:00.425 14:43:59 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.425 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.684 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.684 "name": "Existed_Raid", 00:18:00.684 "uuid": "36974bfc-3c53-4127-9b0b-95799fd5e642", 00:18:00.684 "strip_size_kb": 64, 00:18:00.684 "state": "configuring", 00:18:00.684 "raid_level": "raid5f", 00:18:00.684 "superblock": true, 00:18:00.684 "num_base_bdevs": 3, 00:18:00.684 "num_base_bdevs_discovered": 2, 00:18:00.684 "num_base_bdevs_operational": 3, 00:18:00.684 "base_bdevs_list": [ 00:18:00.684 { 00:18:00.684 "name": "BaseBdev1", 00:18:00.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.684 "is_configured": false, 00:18:00.684 "data_offset": 0, 00:18:00.684 "data_size": 0 00:18:00.684 }, 00:18:00.684 { 00:18:00.684 "name": "BaseBdev2", 00:18:00.684 "uuid": "f571ece0-3992-4a19-9bf2-52f8ba45ce2f", 00:18:00.684 "is_configured": true, 00:18:00.684 "data_offset": 2048, 00:18:00.684 "data_size": 63488 00:18:00.684 }, 00:18:00.684 { 
00:18:00.684 "name": "BaseBdev3", 00:18:00.684 "uuid": "3f2b7d3a-022a-400a-8d86-525d7b83ed79", 00:18:00.684 "is_configured": true, 00:18:00.684 "data_offset": 2048, 00:18:00.684 "data_size": 63488 00:18:00.684 } 00:18:00.684 ] 00:18:00.684 }' 00:18:00.684 14:43:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.684 14:43:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.943 14:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:00.943 14:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.943 14:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.943 [2024-11-04 14:44:00.008522] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:00.943 14:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.943 14:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:00.943 14:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:00.943 14:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:00.943 14:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:00.943 14:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:00.943 14:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:00.943 14:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.943 14:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:18:00.943 14:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.943 14:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.943 14:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.943 14:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.943 14:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.943 14:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.943 14:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.201 14:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.201 "name": "Existed_Raid", 00:18:01.201 "uuid": "36974bfc-3c53-4127-9b0b-95799fd5e642", 00:18:01.201 "strip_size_kb": 64, 00:18:01.201 "state": "configuring", 00:18:01.201 "raid_level": "raid5f", 00:18:01.201 "superblock": true, 00:18:01.201 "num_base_bdevs": 3, 00:18:01.201 "num_base_bdevs_discovered": 1, 00:18:01.201 "num_base_bdevs_operational": 3, 00:18:01.201 "base_bdevs_list": [ 00:18:01.201 { 00:18:01.201 "name": "BaseBdev1", 00:18:01.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.201 "is_configured": false, 00:18:01.201 "data_offset": 0, 00:18:01.201 "data_size": 0 00:18:01.201 }, 00:18:01.201 { 00:18:01.201 "name": null, 00:18:01.201 "uuid": "f571ece0-3992-4a19-9bf2-52f8ba45ce2f", 00:18:01.201 "is_configured": false, 00:18:01.201 "data_offset": 0, 00:18:01.201 "data_size": 63488 00:18:01.201 }, 00:18:01.201 { 00:18:01.201 "name": "BaseBdev3", 00:18:01.201 "uuid": "3f2b7d3a-022a-400a-8d86-525d7b83ed79", 00:18:01.201 "is_configured": true, 00:18:01.201 "data_offset": 2048, 00:18:01.201 "data_size": 
63488 00:18:01.201 } 00:18:01.201 ] 00:18:01.201 }' 00:18:01.201 14:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.201 14:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.459 14:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.460 14:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:01.460 14:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.460 14:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.460 14:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.718 14:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:01.718 14:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:01.718 14:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.718 14:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.718 [2024-11-04 14:44:00.631901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:01.718 BaseBdev1 00:18:01.718 14:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.718 14:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:01.719 14:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:01.719 14:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:01.719 14:44:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:01.719 14:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:01.719 14:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:01.719 14:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:01.719 14:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.719 14:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.719 14:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.719 14:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:01.719 14:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.719 14:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.719 [ 00:18:01.719 { 00:18:01.719 "name": "BaseBdev1", 00:18:01.719 "aliases": [ 00:18:01.719 "a96d40b9-b901-49e8-9be8-7b913054792c" 00:18:01.719 ], 00:18:01.719 "product_name": "Malloc disk", 00:18:01.719 "block_size": 512, 00:18:01.719 "num_blocks": 65536, 00:18:01.719 "uuid": "a96d40b9-b901-49e8-9be8-7b913054792c", 00:18:01.719 "assigned_rate_limits": { 00:18:01.719 "rw_ios_per_sec": 0, 00:18:01.719 "rw_mbytes_per_sec": 0, 00:18:01.719 "r_mbytes_per_sec": 0, 00:18:01.719 "w_mbytes_per_sec": 0 00:18:01.719 }, 00:18:01.719 "claimed": true, 00:18:01.719 "claim_type": "exclusive_write", 00:18:01.719 "zoned": false, 00:18:01.719 "supported_io_types": { 00:18:01.719 "read": true, 00:18:01.719 "write": true, 00:18:01.719 "unmap": true, 00:18:01.719 "flush": true, 00:18:01.719 "reset": true, 00:18:01.719 "nvme_admin": false, 00:18:01.719 
"nvme_io": false, 00:18:01.719 "nvme_io_md": false, 00:18:01.719 "write_zeroes": true, 00:18:01.719 "zcopy": true, 00:18:01.719 "get_zone_info": false, 00:18:01.719 "zone_management": false, 00:18:01.719 "zone_append": false, 00:18:01.719 "compare": false, 00:18:01.719 "compare_and_write": false, 00:18:01.719 "abort": true, 00:18:01.719 "seek_hole": false, 00:18:01.719 "seek_data": false, 00:18:01.719 "copy": true, 00:18:01.719 "nvme_iov_md": false 00:18:01.719 }, 00:18:01.719 "memory_domains": [ 00:18:01.719 { 00:18:01.719 "dma_device_id": "system", 00:18:01.719 "dma_device_type": 1 00:18:01.719 }, 00:18:01.719 { 00:18:01.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.719 "dma_device_type": 2 00:18:01.719 } 00:18:01.719 ], 00:18:01.719 "driver_specific": {} 00:18:01.719 } 00:18:01.719 ] 00:18:01.719 14:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.719 14:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:01.719 14:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:01.719 14:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:01.719 14:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:01.719 14:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:01.719 14:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:01.719 14:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:01.719 14:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.719 14:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:18:01.719 14:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.719 14:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.719 14:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.719 14:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.719 14:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.719 14:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.719 14:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.719 14:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.719 "name": "Existed_Raid", 00:18:01.719 "uuid": "36974bfc-3c53-4127-9b0b-95799fd5e642", 00:18:01.719 "strip_size_kb": 64, 00:18:01.719 "state": "configuring", 00:18:01.719 "raid_level": "raid5f", 00:18:01.719 "superblock": true, 00:18:01.719 "num_base_bdevs": 3, 00:18:01.719 "num_base_bdevs_discovered": 2, 00:18:01.719 "num_base_bdevs_operational": 3, 00:18:01.719 "base_bdevs_list": [ 00:18:01.719 { 00:18:01.719 "name": "BaseBdev1", 00:18:01.719 "uuid": "a96d40b9-b901-49e8-9be8-7b913054792c", 00:18:01.719 "is_configured": true, 00:18:01.719 "data_offset": 2048, 00:18:01.719 "data_size": 63488 00:18:01.719 }, 00:18:01.719 { 00:18:01.719 "name": null, 00:18:01.719 "uuid": "f571ece0-3992-4a19-9bf2-52f8ba45ce2f", 00:18:01.719 "is_configured": false, 00:18:01.719 "data_offset": 0, 00:18:01.719 "data_size": 63488 00:18:01.719 }, 00:18:01.719 { 00:18:01.719 "name": "BaseBdev3", 00:18:01.719 "uuid": "3f2b7d3a-022a-400a-8d86-525d7b83ed79", 00:18:01.719 "is_configured": true, 00:18:01.719 "data_offset": 2048, 00:18:01.719 "data_size": 
63488 00:18:01.719 } 00:18:01.719 ] 00:18:01.719 }' 00:18:01.719 14:44:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.719 14:44:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.286 14:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.286 14:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.286 14:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.287 14:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:02.287 14:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.287 14:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:02.287 14:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:02.287 14:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.287 14:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.287 [2024-11-04 14:44:01.240203] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:02.287 14:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.287 14:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:02.287 14:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:02.287 14:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:02.287 14:44:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:02.287 14:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.287 14:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:02.287 14:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.287 14:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.287 14:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.287 14:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.287 14:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.287 14:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.287 14:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.287 14:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.287 14:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.287 14:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.287 "name": "Existed_Raid", 00:18:02.287 "uuid": "36974bfc-3c53-4127-9b0b-95799fd5e642", 00:18:02.287 "strip_size_kb": 64, 00:18:02.287 "state": "configuring", 00:18:02.287 "raid_level": "raid5f", 00:18:02.287 "superblock": true, 00:18:02.287 "num_base_bdevs": 3, 00:18:02.287 "num_base_bdevs_discovered": 1, 00:18:02.287 "num_base_bdevs_operational": 3, 00:18:02.287 "base_bdevs_list": [ 00:18:02.287 { 00:18:02.287 "name": "BaseBdev1", 00:18:02.287 "uuid": "a96d40b9-b901-49e8-9be8-7b913054792c", 
00:18:02.287 "is_configured": true, 00:18:02.287 "data_offset": 2048, 00:18:02.287 "data_size": 63488 00:18:02.287 }, 00:18:02.287 { 00:18:02.287 "name": null, 00:18:02.287 "uuid": "f571ece0-3992-4a19-9bf2-52f8ba45ce2f", 00:18:02.287 "is_configured": false, 00:18:02.287 "data_offset": 0, 00:18:02.287 "data_size": 63488 00:18:02.287 }, 00:18:02.287 { 00:18:02.287 "name": null, 00:18:02.287 "uuid": "3f2b7d3a-022a-400a-8d86-525d7b83ed79", 00:18:02.287 "is_configured": false, 00:18:02.287 "data_offset": 0, 00:18:02.287 "data_size": 63488 00:18:02.287 } 00:18:02.287 ] 00:18:02.287 }' 00:18:02.287 14:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.287 14:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.854 14:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.854 14:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:02.854 14:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.854 14:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.854 14:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.854 14:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:02.854 14:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:02.854 14:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.854 14:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.854 [2024-11-04 14:44:01.820396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:18:02.854 14:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.854 14:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:02.854 14:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:02.854 14:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:02.854 14:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:02.854 14:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.854 14:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:02.854 14:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.854 14:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.854 14:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.854 14:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.854 14:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.854 14:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.854 14:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.854 14:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.854 14:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.854 14:44:01 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.854 "name": "Existed_Raid", 00:18:02.854 "uuid": "36974bfc-3c53-4127-9b0b-95799fd5e642", 00:18:02.854 "strip_size_kb": 64, 00:18:02.854 "state": "configuring", 00:18:02.854 "raid_level": "raid5f", 00:18:02.854 "superblock": true, 00:18:02.854 "num_base_bdevs": 3, 00:18:02.854 "num_base_bdevs_discovered": 2, 00:18:02.854 "num_base_bdevs_operational": 3, 00:18:02.854 "base_bdevs_list": [ 00:18:02.854 { 00:18:02.854 "name": "BaseBdev1", 00:18:02.854 "uuid": "a96d40b9-b901-49e8-9be8-7b913054792c", 00:18:02.854 "is_configured": true, 00:18:02.854 "data_offset": 2048, 00:18:02.854 "data_size": 63488 00:18:02.854 }, 00:18:02.854 { 00:18:02.854 "name": null, 00:18:02.854 "uuid": "f571ece0-3992-4a19-9bf2-52f8ba45ce2f", 00:18:02.854 "is_configured": false, 00:18:02.854 "data_offset": 0, 00:18:02.854 "data_size": 63488 00:18:02.854 }, 00:18:02.854 { 00:18:02.854 "name": "BaseBdev3", 00:18:02.854 "uuid": "3f2b7d3a-022a-400a-8d86-525d7b83ed79", 00:18:02.855 "is_configured": true, 00:18:02.855 "data_offset": 2048, 00:18:02.855 "data_size": 63488 00:18:02.855 } 00:18:02.855 ] 00:18:02.855 }' 00:18:02.855 14:44:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.855 14:44:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.424 14:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.424 14:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:03.424 14:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.424 14:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.424 14:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.424 14:44:02 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:03.424 14:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:03.424 14:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.424 14:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.424 [2024-11-04 14:44:02.440614] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:03.424 14:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.424 14:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:03.424 14:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:03.424 14:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:03.424 14:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:03.424 14:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:03.424 14:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:03.424 14:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.424 14:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.424 14:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.424 14:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.424 14:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:18:03.424 14:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.424 14:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.424 14:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.684 14:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.684 14:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.684 "name": "Existed_Raid", 00:18:03.684 "uuid": "36974bfc-3c53-4127-9b0b-95799fd5e642", 00:18:03.684 "strip_size_kb": 64, 00:18:03.684 "state": "configuring", 00:18:03.684 "raid_level": "raid5f", 00:18:03.684 "superblock": true, 00:18:03.684 "num_base_bdevs": 3, 00:18:03.684 "num_base_bdevs_discovered": 1, 00:18:03.684 "num_base_bdevs_operational": 3, 00:18:03.684 "base_bdevs_list": [ 00:18:03.684 { 00:18:03.684 "name": null, 00:18:03.684 "uuid": "a96d40b9-b901-49e8-9be8-7b913054792c", 00:18:03.684 "is_configured": false, 00:18:03.684 "data_offset": 0, 00:18:03.684 "data_size": 63488 00:18:03.684 }, 00:18:03.684 { 00:18:03.684 "name": null, 00:18:03.684 "uuid": "f571ece0-3992-4a19-9bf2-52f8ba45ce2f", 00:18:03.684 "is_configured": false, 00:18:03.684 "data_offset": 0, 00:18:03.684 "data_size": 63488 00:18:03.684 }, 00:18:03.684 { 00:18:03.684 "name": "BaseBdev3", 00:18:03.684 "uuid": "3f2b7d3a-022a-400a-8d86-525d7b83ed79", 00:18:03.684 "is_configured": true, 00:18:03.684 "data_offset": 2048, 00:18:03.684 "data_size": 63488 00:18:03.684 } 00:18:03.684 ] 00:18:03.684 }' 00:18:03.684 14:44:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.684 14:44:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.942 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:03.942 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:03.942 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.942 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.200 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.200 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:04.200 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:04.200 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.200 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.200 [2024-11-04 14:44:03.105277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:04.200 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.200 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:04.200 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:04.200 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:04.200 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:04.200 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:04.200 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:04.200 14:44:03 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.200 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.200 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.200 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.200 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.200 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.200 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.200 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:04.201 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.201 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.201 "name": "Existed_Raid", 00:18:04.201 "uuid": "36974bfc-3c53-4127-9b0b-95799fd5e642", 00:18:04.201 "strip_size_kb": 64, 00:18:04.201 "state": "configuring", 00:18:04.201 "raid_level": "raid5f", 00:18:04.201 "superblock": true, 00:18:04.201 "num_base_bdevs": 3, 00:18:04.201 "num_base_bdevs_discovered": 2, 00:18:04.201 "num_base_bdevs_operational": 3, 00:18:04.201 "base_bdevs_list": [ 00:18:04.201 { 00:18:04.201 "name": null, 00:18:04.201 "uuid": "a96d40b9-b901-49e8-9be8-7b913054792c", 00:18:04.201 "is_configured": false, 00:18:04.201 "data_offset": 0, 00:18:04.201 "data_size": 63488 00:18:04.201 }, 00:18:04.201 { 00:18:04.201 "name": "BaseBdev2", 00:18:04.201 "uuid": "f571ece0-3992-4a19-9bf2-52f8ba45ce2f", 00:18:04.201 "is_configured": true, 00:18:04.201 "data_offset": 2048, 00:18:04.201 "data_size": 63488 00:18:04.201 }, 00:18:04.201 { 
00:18:04.201 "name": "BaseBdev3", 00:18:04.201 "uuid": "3f2b7d3a-022a-400a-8d86-525d7b83ed79", 00:18:04.201 "is_configured": true, 00:18:04.201 "data_offset": 2048, 00:18:04.201 "data_size": 63488 00:18:04.201 } 00:18:04.201 ] 00:18:04.201 }' 00:18:04.201 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.201 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a96d40b9-b901-49e8-9be8-7b913054792c 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.772 [2024-11-04 14:44:03.815710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:04.772 [2024-11-04 14:44:03.816236] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:04.772 [2024-11-04 14:44:03.816268] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:04.772 NewBaseBdev 00:18:04.772 [2024-11-04 14:44:03.816583] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.772 [2024-11-04 14:44:03.821528] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:04.772 
[2024-11-04 14:44:03.821552] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:04.772 [2024-11-04 14:44:03.821856] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.772 [ 00:18:04.772 { 00:18:04.772 "name": "NewBaseBdev", 00:18:04.772 "aliases": [ 00:18:04.772 "a96d40b9-b901-49e8-9be8-7b913054792c" 00:18:04.772 ], 00:18:04.772 "product_name": "Malloc disk", 00:18:04.772 "block_size": 512, 00:18:04.772 "num_blocks": 65536, 00:18:04.772 "uuid": "a96d40b9-b901-49e8-9be8-7b913054792c", 00:18:04.772 "assigned_rate_limits": { 00:18:04.772 "rw_ios_per_sec": 0, 00:18:04.772 "rw_mbytes_per_sec": 0, 00:18:04.772 "r_mbytes_per_sec": 0, 00:18:04.772 "w_mbytes_per_sec": 0 00:18:04.772 }, 00:18:04.772 "claimed": true, 00:18:04.772 "claim_type": "exclusive_write", 00:18:04.772 "zoned": false, 00:18:04.772 "supported_io_types": { 00:18:04.772 "read": true, 00:18:04.772 "write": true, 00:18:04.772 "unmap": true, 00:18:04.772 "flush": true, 00:18:04.772 "reset": true, 00:18:04.772 "nvme_admin": false, 00:18:04.772 "nvme_io": false, 00:18:04.772 "nvme_io_md": false, 00:18:04.772 "write_zeroes": true, 00:18:04.772 "zcopy": true, 00:18:04.772 "get_zone_info": false, 00:18:04.772 "zone_management": false, 00:18:04.772 "zone_append": false, 00:18:04.772 "compare": false, 00:18:04.772 "compare_and_write": false, 00:18:04.772 "abort": true, 00:18:04.772 "seek_hole": false, 00:18:04.772 "seek_data": false, 
00:18:04.772 "copy": true, 00:18:04.772 "nvme_iov_md": false 00:18:04.772 }, 00:18:04.772 "memory_domains": [ 00:18:04.772 { 00:18:04.772 "dma_device_id": "system", 00:18:04.772 "dma_device_type": 1 00:18:04.772 }, 00:18:04.772 { 00:18:04.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.772 "dma_device_type": 2 00:18:04.772 } 00:18:04.772 ], 00:18:04.772 "driver_specific": {} 00:18:04.772 } 00:18:04.772 ] 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.772 14:44:03 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.772 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.032 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.032 "name": "Existed_Raid", 00:18:05.032 "uuid": "36974bfc-3c53-4127-9b0b-95799fd5e642", 00:18:05.032 "strip_size_kb": 64, 00:18:05.032 "state": "online", 00:18:05.032 "raid_level": "raid5f", 00:18:05.032 "superblock": true, 00:18:05.032 "num_base_bdevs": 3, 00:18:05.032 "num_base_bdevs_discovered": 3, 00:18:05.032 "num_base_bdevs_operational": 3, 00:18:05.032 "base_bdevs_list": [ 00:18:05.032 { 00:18:05.032 "name": "NewBaseBdev", 00:18:05.032 "uuid": "a96d40b9-b901-49e8-9be8-7b913054792c", 00:18:05.032 "is_configured": true, 00:18:05.032 "data_offset": 2048, 00:18:05.032 "data_size": 63488 00:18:05.032 }, 00:18:05.032 { 00:18:05.032 "name": "BaseBdev2", 00:18:05.032 "uuid": "f571ece0-3992-4a19-9bf2-52f8ba45ce2f", 00:18:05.032 "is_configured": true, 00:18:05.032 "data_offset": 2048, 00:18:05.032 "data_size": 63488 00:18:05.032 }, 00:18:05.032 { 00:18:05.032 "name": "BaseBdev3", 00:18:05.032 "uuid": "3f2b7d3a-022a-400a-8d86-525d7b83ed79", 00:18:05.032 "is_configured": true, 00:18:05.032 "data_offset": 2048, 00:18:05.032 "data_size": 63488 00:18:05.032 } 00:18:05.032 ] 00:18:05.032 }' 00:18:05.032 14:44:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.032 14:44:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.291 14:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:18:05.291 14:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:05.291 14:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:05.291 14:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:05.291 14:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:05.291 14:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:05.291 14:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:05.291 14:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.291 14:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.291 14:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:05.291 [2024-11-04 14:44:04.376136] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:05.291 14:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.550 14:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:05.550 "name": "Existed_Raid", 00:18:05.550 "aliases": [ 00:18:05.550 "36974bfc-3c53-4127-9b0b-95799fd5e642" 00:18:05.550 ], 00:18:05.550 "product_name": "Raid Volume", 00:18:05.550 "block_size": 512, 00:18:05.550 "num_blocks": 126976, 00:18:05.550 "uuid": "36974bfc-3c53-4127-9b0b-95799fd5e642", 00:18:05.550 "assigned_rate_limits": { 00:18:05.550 "rw_ios_per_sec": 0, 00:18:05.550 "rw_mbytes_per_sec": 0, 00:18:05.550 "r_mbytes_per_sec": 0, 00:18:05.550 "w_mbytes_per_sec": 0 00:18:05.550 }, 00:18:05.550 "claimed": false, 00:18:05.550 "zoned": false, 00:18:05.550 
"supported_io_types": { 00:18:05.550 "read": true, 00:18:05.550 "write": true, 00:18:05.550 "unmap": false, 00:18:05.550 "flush": false, 00:18:05.550 "reset": true, 00:18:05.550 "nvme_admin": false, 00:18:05.550 "nvme_io": false, 00:18:05.550 "nvme_io_md": false, 00:18:05.550 "write_zeroes": true, 00:18:05.550 "zcopy": false, 00:18:05.550 "get_zone_info": false, 00:18:05.550 "zone_management": false, 00:18:05.550 "zone_append": false, 00:18:05.550 "compare": false, 00:18:05.550 "compare_and_write": false, 00:18:05.550 "abort": false, 00:18:05.550 "seek_hole": false, 00:18:05.550 "seek_data": false, 00:18:05.550 "copy": false, 00:18:05.550 "nvme_iov_md": false 00:18:05.550 }, 00:18:05.550 "driver_specific": { 00:18:05.550 "raid": { 00:18:05.550 "uuid": "36974bfc-3c53-4127-9b0b-95799fd5e642", 00:18:05.550 "strip_size_kb": 64, 00:18:05.550 "state": "online", 00:18:05.550 "raid_level": "raid5f", 00:18:05.550 "superblock": true, 00:18:05.550 "num_base_bdevs": 3, 00:18:05.550 "num_base_bdevs_discovered": 3, 00:18:05.550 "num_base_bdevs_operational": 3, 00:18:05.550 "base_bdevs_list": [ 00:18:05.550 { 00:18:05.550 "name": "NewBaseBdev", 00:18:05.550 "uuid": "a96d40b9-b901-49e8-9be8-7b913054792c", 00:18:05.550 "is_configured": true, 00:18:05.550 "data_offset": 2048, 00:18:05.550 "data_size": 63488 00:18:05.550 }, 00:18:05.550 { 00:18:05.550 "name": "BaseBdev2", 00:18:05.550 "uuid": "f571ece0-3992-4a19-9bf2-52f8ba45ce2f", 00:18:05.550 "is_configured": true, 00:18:05.550 "data_offset": 2048, 00:18:05.550 "data_size": 63488 00:18:05.550 }, 00:18:05.550 { 00:18:05.550 "name": "BaseBdev3", 00:18:05.550 "uuid": "3f2b7d3a-022a-400a-8d86-525d7b83ed79", 00:18:05.550 "is_configured": true, 00:18:05.550 "data_offset": 2048, 00:18:05.550 "data_size": 63488 00:18:05.550 } 00:18:05.550 ] 00:18:05.550 } 00:18:05.550 } 00:18:05.550 }' 00:18:05.550 14:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:18:05.550 14:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:05.550 BaseBdev2 00:18:05.550 BaseBdev3' 00:18:05.550 14:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:05.550 14:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:05.550 14:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:05.550 14:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:05.550 14:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.550 14:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.550 14:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:05.550 14:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.551 14:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:05.551 14:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:05.551 14:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:05.551 14:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:05.551 14:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:05.551 14:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:05.551 14:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.551 14:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.551 14:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:05.551 14:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:05.551 14:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:05.551 14:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:05.551 14:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.551 14:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.551 14:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:05.551 14:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.809 14:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:05.809 14:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:05.809 14:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:05.809 14:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.809 14:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.809 [2024-11-04 14:44:04.707920] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:05.809 [2024-11-04 14:44:04.707984] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:18:05.809 [2024-11-04 14:44:04.708090] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:05.809 [2024-11-04 14:44:04.708467] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:05.809 [2024-11-04 14:44:04.708488] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:05.809 14:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.809 14:44:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80859 00:18:05.809 14:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 80859 ']' 00:18:05.809 14:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 80859 00:18:05.809 14:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:18:05.809 14:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:05.809 14:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80859 00:18:05.809 killing process with pid 80859 00:18:05.809 14:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:05.809 14:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:05.809 14:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80859' 00:18:05.809 14:44:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 80859 00:18:05.809 [2024-11-04 14:44:04.756977] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:05.809 14:44:04 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@976 -- # wait 80859 00:18:06.068 [2024-11-04 14:44:05.029133] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:07.004 14:44:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:18:07.004 00:18:07.004 real 0m12.079s 00:18:07.004 user 0m20.128s 00:18:07.004 sys 0m1.657s 00:18:07.004 14:44:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:07.004 14:44:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.004 ************************************ 00:18:07.004 END TEST raid5f_state_function_test_sb 00:18:07.004 ************************************ 00:18:07.262 14:44:06 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:18:07.262 14:44:06 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:18:07.262 14:44:06 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:07.262 14:44:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:07.262 ************************************ 00:18:07.262 START TEST raid5f_superblock_test 00:18:07.262 ************************************ 00:18:07.262 14:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 3 00:18:07.262 14:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:18:07.262 14:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:18:07.262 14:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:07.262 14:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:07.262 14:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:07.262 14:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 
00:18:07.262 14:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:07.262 14:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:07.262 14:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:07.262 14:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:07.262 14:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:07.262 14:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:07.262 14:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:07.262 14:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:18:07.262 14:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:18:07.262 14:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:18:07.262 14:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81492 00:18:07.262 14:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:07.262 14:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81492 00:18:07.262 14:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 81492 ']' 00:18:07.262 14:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.262 14:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:07.262 14:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:07.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.262 14:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:07.262 14:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.262 [2024-11-04 14:44:06.260840] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:18:07.262 [2024-11-04 14:44:06.261240] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81492 ] 00:18:07.520 [2024-11-04 14:44:06.449121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.520 [2024-11-04 14:44:06.577857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.779 [2024-11-04 14:44:06.787475] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:07.779 [2024-11-04 14:44:06.787527] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:08.347 14:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:08.347 14:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:18:08.347 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:08.347 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:08.347 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:08.347 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:08.347 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:08.347 14:44:07 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:08.347 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:08.347 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:08.347 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:18:08.347 14:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.347 14:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.347 malloc1 00:18:08.347 14:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.347 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:08.347 14:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.347 14:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.347 [2024-11-04 14:44:07.319165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:08.347 [2024-11-04 14:44:07.319390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.348 [2024-11-04 14:44:07.319465] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:08.348 [2024-11-04 14:44:07.319692] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.348 [2024-11-04 14:44:07.322538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.348 [2024-11-04 14:44:07.322740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:08.348 pt1 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.348 malloc2 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.348 [2024-11-04 14:44:07.376623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:08.348 [2024-11-04 14:44:07.376721] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.348 [2024-11-04 14:44:07.376752] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:08.348 [2024-11-04 14:44:07.376767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.348 [2024-11-04 14:44:07.379573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.348 [2024-11-04 14:44:07.379615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:08.348 pt2 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.348 malloc3 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.348 [2024-11-04 14:44:07.443353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:08.348 [2024-11-04 14:44:07.443420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.348 [2024-11-04 14:44:07.443451] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:08.348 [2024-11-04 14:44:07.443467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.348 [2024-11-04 14:44:07.446233] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.348 [2024-11-04 14:44:07.446412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:08.348 pt3 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.348 [2024-11-04 14:44:07.455432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:08.348 [2024-11-04 
14:44:07.457992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:08.348 [2024-11-04 14:44:07.458091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:08.348 [2024-11-04 14:44:07.458309] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:08.348 [2024-11-04 14:44:07.458339] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:08.348 [2024-11-04 14:44:07.458668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:08.348 [2024-11-04 14:44:07.464027] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:08.348 [2024-11-04 14:44:07.464171] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:08.348 [2024-11-04 14:44:07.464430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.348 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.607 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.607 14:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.607 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.607 14:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.607 14:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.607 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.607 "name": "raid_bdev1", 00:18:08.607 "uuid": "b6667626-a255-44b8-a7c5-cdaa58d5512f", 00:18:08.607 "strip_size_kb": 64, 00:18:08.607 "state": "online", 00:18:08.607 "raid_level": "raid5f", 00:18:08.607 "superblock": true, 00:18:08.607 "num_base_bdevs": 3, 00:18:08.607 "num_base_bdevs_discovered": 3, 00:18:08.607 "num_base_bdevs_operational": 3, 00:18:08.607 "base_bdevs_list": [ 00:18:08.607 { 00:18:08.607 "name": "pt1", 00:18:08.607 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:08.607 "is_configured": true, 00:18:08.607 "data_offset": 2048, 00:18:08.607 "data_size": 63488 00:18:08.607 }, 00:18:08.607 { 00:18:08.607 "name": "pt2", 00:18:08.607 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:08.607 "is_configured": true, 00:18:08.607 "data_offset": 2048, 00:18:08.607 "data_size": 63488 00:18:08.607 }, 00:18:08.607 { 00:18:08.607 "name": "pt3", 00:18:08.607 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:08.607 "is_configured": true, 00:18:08.607 "data_offset": 2048, 00:18:08.607 "data_size": 63488 00:18:08.607 } 00:18:08.607 ] 00:18:08.607 }' 00:18:08.607 14:44:07 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.607 14:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.865 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:08.865 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:08.865 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:08.865 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:08.866 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:08.866 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:08.866 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:08.866 14:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:08.866 14:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.866 14:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.866 [2024-11-04 14:44:07.970510] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:09.124 14:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.124 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:09.124 "name": "raid_bdev1", 00:18:09.124 "aliases": [ 00:18:09.124 "b6667626-a255-44b8-a7c5-cdaa58d5512f" 00:18:09.124 ], 00:18:09.124 "product_name": "Raid Volume", 00:18:09.124 "block_size": 512, 00:18:09.124 "num_blocks": 126976, 00:18:09.124 "uuid": "b6667626-a255-44b8-a7c5-cdaa58d5512f", 00:18:09.124 "assigned_rate_limits": { 00:18:09.124 "rw_ios_per_sec": 0, 00:18:09.124 
"rw_mbytes_per_sec": 0, 00:18:09.124 "r_mbytes_per_sec": 0, 00:18:09.124 "w_mbytes_per_sec": 0 00:18:09.124 }, 00:18:09.124 "claimed": false, 00:18:09.124 "zoned": false, 00:18:09.124 "supported_io_types": { 00:18:09.124 "read": true, 00:18:09.124 "write": true, 00:18:09.124 "unmap": false, 00:18:09.124 "flush": false, 00:18:09.124 "reset": true, 00:18:09.124 "nvme_admin": false, 00:18:09.124 "nvme_io": false, 00:18:09.124 "nvme_io_md": false, 00:18:09.124 "write_zeroes": true, 00:18:09.124 "zcopy": false, 00:18:09.124 "get_zone_info": false, 00:18:09.124 "zone_management": false, 00:18:09.124 "zone_append": false, 00:18:09.124 "compare": false, 00:18:09.124 "compare_and_write": false, 00:18:09.124 "abort": false, 00:18:09.124 "seek_hole": false, 00:18:09.124 "seek_data": false, 00:18:09.124 "copy": false, 00:18:09.124 "nvme_iov_md": false 00:18:09.124 }, 00:18:09.124 "driver_specific": { 00:18:09.124 "raid": { 00:18:09.124 "uuid": "b6667626-a255-44b8-a7c5-cdaa58d5512f", 00:18:09.124 "strip_size_kb": 64, 00:18:09.124 "state": "online", 00:18:09.124 "raid_level": "raid5f", 00:18:09.125 "superblock": true, 00:18:09.125 "num_base_bdevs": 3, 00:18:09.125 "num_base_bdevs_discovered": 3, 00:18:09.125 "num_base_bdevs_operational": 3, 00:18:09.125 "base_bdevs_list": [ 00:18:09.125 { 00:18:09.125 "name": "pt1", 00:18:09.125 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:09.125 "is_configured": true, 00:18:09.125 "data_offset": 2048, 00:18:09.125 "data_size": 63488 00:18:09.125 }, 00:18:09.125 { 00:18:09.125 "name": "pt2", 00:18:09.125 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:09.125 "is_configured": true, 00:18:09.125 "data_offset": 2048, 00:18:09.125 "data_size": 63488 00:18:09.125 }, 00:18:09.125 { 00:18:09.125 "name": "pt3", 00:18:09.125 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:09.125 "is_configured": true, 00:18:09.125 "data_offset": 2048, 00:18:09.125 "data_size": 63488 00:18:09.125 } 00:18:09.125 ] 00:18:09.125 } 00:18:09.125 } 
00:18:09.125 }' 00:18:09.125 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:09.125 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:09.125 pt2 00:18:09.125 pt3' 00:18:09.125 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:09.125 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:09.125 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:09.125 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:09.125 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:09.125 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.125 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.125 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.125 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:09.125 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:09.125 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:09.125 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:09.125 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.125 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:18:09.125 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.125 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.125 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:09.125 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:09.125 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:09.125 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:09.125 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:09.125 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.125 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.125 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.444 [2024-11-04 14:44:08.278546] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b6667626-a255-44b8-a7c5-cdaa58d5512f 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b6667626-a255-44b8-a7c5-cdaa58d5512f ']' 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.444 [2024-11-04 14:44:08.326310] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:09.444 [2024-11-04 14:44:08.326392] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:09.444 [2024-11-04 14:44:08.326475] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:09.444 [2024-11-04 14:44:08.326566] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:09.444 [2024-11-04 14:44:08.326581] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 
-- # raid_bdev= 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] 
| any' 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.444 [2024-11-04 14:44:08.482428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:09.444 [2024-11-04 14:44:08.484954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc2 is claimed 00:18:09.444 [2024-11-04 14:44:08.485169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:09.444 [2024-11-04 14:44:08.485290] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:09.444 [2024-11-04 14:44:08.485636] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:09.444 [2024-11-04 14:44:08.485900] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:09.444 [2024-11-04 14:44:08.486200] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:09.444 [2024-11-04 14:44:08.486248] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:09.444 request: 00:18:09.444 { 00:18:09.444 "name": "raid_bdev1", 00:18:09.444 "raid_level": "raid5f", 00:18:09.444 "base_bdevs": [ 00:18:09.444 "malloc1", 00:18:09.444 "malloc2", 00:18:09.444 "malloc3" 00:18:09.444 ], 00:18:09.444 "strip_size_kb": 64, 00:18:09.444 "superblock": false, 00:18:09.444 "method": "bdev_raid_create", 00:18:09.444 "req_id": 1 00:18:09.444 } 00:18:09.444 Got JSON-RPC error response 00:18:09.444 response: 00:18:09.444 { 00:18:09.444 "code": -17, 00:18:09.444 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:09.444 } 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 
-- # (( !es == 0 )) 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.444 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.704 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:09.704 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:09.704 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:09.704 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.704 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.704 [2024-11-04 14:44:08.554680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:09.704 [2024-11-04 14:44:08.554753] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.704 [2024-11-04 14:44:08.554780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:09.704 [2024-11-04 14:44:08.554794] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.704 [2024-11-04 14:44:08.557747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.704 [2024-11-04 14:44:08.557976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:09.704 [2024-11-04 14:44:08.558090] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:09.704 [2024-11-04 
14:44:08.558152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:09.704 pt1 00:18:09.704 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.704 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:09.704 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.704 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:09.704 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:09.704 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:09.704 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:09.704 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.704 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.704 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.704 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.704 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.704 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.704 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.704 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.704 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.704 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:18:09.704 "name": "raid_bdev1", 00:18:09.704 "uuid": "b6667626-a255-44b8-a7c5-cdaa58d5512f", 00:18:09.704 "strip_size_kb": 64, 00:18:09.704 "state": "configuring", 00:18:09.704 "raid_level": "raid5f", 00:18:09.704 "superblock": true, 00:18:09.704 "num_base_bdevs": 3, 00:18:09.704 "num_base_bdevs_discovered": 1, 00:18:09.704 "num_base_bdevs_operational": 3, 00:18:09.704 "base_bdevs_list": [ 00:18:09.704 { 00:18:09.704 "name": "pt1", 00:18:09.704 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:09.704 "is_configured": true, 00:18:09.704 "data_offset": 2048, 00:18:09.704 "data_size": 63488 00:18:09.704 }, 00:18:09.704 { 00:18:09.704 "name": null, 00:18:09.704 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:09.704 "is_configured": false, 00:18:09.704 "data_offset": 2048, 00:18:09.704 "data_size": 63488 00:18:09.704 }, 00:18:09.704 { 00:18:09.704 "name": null, 00:18:09.704 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:09.704 "is_configured": false, 00:18:09.704 "data_offset": 2048, 00:18:09.704 "data_size": 63488 00:18:09.704 } 00:18:09.704 ] 00:18:09.704 }' 00:18:09.704 14:44:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.704 14:44:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.271 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:18:10.271 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:10.271 14:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.271 14:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.271 [2024-11-04 14:44:09.090872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:10.271 [2024-11-04 14:44:09.091085] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:18:10.271 [2024-11-04 14:44:09.091171] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:10.271 [2024-11-04 14:44:09.091193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.271 [2024-11-04 14:44:09.091753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.271 [2024-11-04 14:44:09.091795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:10.271 [2024-11-04 14:44:09.091912] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:10.271 [2024-11-04 14:44:09.091958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:10.271 pt2 00:18:10.271 14:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.271 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:18:10.271 14:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.271 14:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.271 [2024-11-04 14:44:09.098884] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:10.271 14:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.271 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:10.271 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.271 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:10.272 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:10.272 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:18:10.272 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:10.272 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.272 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.272 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.272 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.272 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.272 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.272 14:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.272 14:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.272 14:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.272 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.272 "name": "raid_bdev1", 00:18:10.272 "uuid": "b6667626-a255-44b8-a7c5-cdaa58d5512f", 00:18:10.272 "strip_size_kb": 64, 00:18:10.272 "state": "configuring", 00:18:10.272 "raid_level": "raid5f", 00:18:10.272 "superblock": true, 00:18:10.272 "num_base_bdevs": 3, 00:18:10.272 "num_base_bdevs_discovered": 1, 00:18:10.272 "num_base_bdevs_operational": 3, 00:18:10.272 "base_bdevs_list": [ 00:18:10.272 { 00:18:10.272 "name": "pt1", 00:18:10.272 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:10.272 "is_configured": true, 00:18:10.272 "data_offset": 2048, 00:18:10.272 "data_size": 63488 00:18:10.272 }, 00:18:10.272 { 00:18:10.272 "name": null, 00:18:10.272 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:10.272 "is_configured": false, 00:18:10.272 
"data_offset": 0, 00:18:10.272 "data_size": 63488 00:18:10.272 }, 00:18:10.272 { 00:18:10.272 "name": null, 00:18:10.272 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:10.272 "is_configured": false, 00:18:10.272 "data_offset": 2048, 00:18:10.272 "data_size": 63488 00:18:10.272 } 00:18:10.272 ] 00:18:10.272 }' 00:18:10.272 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.272 14:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.530 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:10.530 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:10.530 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:10.530 14:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.530 14:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.530 [2024-11-04 14:44:09.631033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:10.530 [2024-11-04 14:44:09.631116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.530 [2024-11-04 14:44:09.631143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:10.530 [2024-11-04 14:44:09.631161] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.530 [2024-11-04 14:44:09.631992] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.530 [2024-11-04 14:44:09.632029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:10.530 [2024-11-04 14:44:09.632131] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:10.530 [2024-11-04 14:44:09.632166] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:10.530 pt2 00:18:10.530 14:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.530 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:10.530 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:10.530 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:10.530 14:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.530 14:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.530 [2024-11-04 14:44:09.643003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:10.530 [2024-11-04 14:44:09.643063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.530 [2024-11-04 14:44:09.643085] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:10.530 [2024-11-04 14:44:09.643101] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.530 [2024-11-04 14:44:09.643536] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.530 [2024-11-04 14:44:09.643586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:10.530 [2024-11-04 14:44:09.643661] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:10.530 [2024-11-04 14:44:09.643691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:10.530 [2024-11-04 14:44:09.643866] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:10.530 [2024-11-04 14:44:09.643892] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, 
blocklen 512 00:18:10.530 [2024-11-04 14:44:09.644209] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:10.530 [2024-11-04 14:44:09.649155] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:10.530 [2024-11-04 14:44:09.649179] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:10.530 [2024-11-04 14:44:09.649386] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.530 pt3 00:18:10.788 14:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.788 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:10.788 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:10.788 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:10.788 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.788 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.788 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:10.788 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:10.788 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:10.788 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.788 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.788 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.788 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 
00:18:10.788 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.788 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.788 14:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.788 14:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.788 14:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.788 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.788 "name": "raid_bdev1", 00:18:10.788 "uuid": "b6667626-a255-44b8-a7c5-cdaa58d5512f", 00:18:10.788 "strip_size_kb": 64, 00:18:10.788 "state": "online", 00:18:10.788 "raid_level": "raid5f", 00:18:10.788 "superblock": true, 00:18:10.788 "num_base_bdevs": 3, 00:18:10.788 "num_base_bdevs_discovered": 3, 00:18:10.788 "num_base_bdevs_operational": 3, 00:18:10.788 "base_bdevs_list": [ 00:18:10.788 { 00:18:10.788 "name": "pt1", 00:18:10.788 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:10.788 "is_configured": true, 00:18:10.788 "data_offset": 2048, 00:18:10.788 "data_size": 63488 00:18:10.788 }, 00:18:10.788 { 00:18:10.788 "name": "pt2", 00:18:10.788 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:10.788 "is_configured": true, 00:18:10.788 "data_offset": 2048, 00:18:10.788 "data_size": 63488 00:18:10.788 }, 00:18:10.788 { 00:18:10.788 "name": "pt3", 00:18:10.788 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:10.788 "is_configured": true, 00:18:10.788 "data_offset": 2048, 00:18:10.788 "data_size": 63488 00:18:10.788 } 00:18:10.788 ] 00:18:10.788 }' 00:18:10.788 14:44:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.788 14:44:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.355 14:44:10 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:11.355 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:11.355 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:11.355 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:11.355 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:11.355 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:11.355 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:11.355 14:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.355 14:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.355 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:11.355 [2024-11-04 14:44:10.195347] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:11.355 14:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.355 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:11.355 "name": "raid_bdev1", 00:18:11.355 "aliases": [ 00:18:11.355 "b6667626-a255-44b8-a7c5-cdaa58d5512f" 00:18:11.355 ], 00:18:11.355 "product_name": "Raid Volume", 00:18:11.355 "block_size": 512, 00:18:11.355 "num_blocks": 126976, 00:18:11.355 "uuid": "b6667626-a255-44b8-a7c5-cdaa58d5512f", 00:18:11.355 "assigned_rate_limits": { 00:18:11.355 "rw_ios_per_sec": 0, 00:18:11.355 "rw_mbytes_per_sec": 0, 00:18:11.355 "r_mbytes_per_sec": 0, 00:18:11.355 "w_mbytes_per_sec": 0 00:18:11.355 }, 00:18:11.355 "claimed": false, 00:18:11.355 "zoned": false, 00:18:11.355 "supported_io_types": { 
00:18:11.355 "read": true, 00:18:11.355 "write": true, 00:18:11.355 "unmap": false, 00:18:11.355 "flush": false, 00:18:11.355 "reset": true, 00:18:11.355 "nvme_admin": false, 00:18:11.355 "nvme_io": false, 00:18:11.355 "nvme_io_md": false, 00:18:11.355 "write_zeroes": true, 00:18:11.355 "zcopy": false, 00:18:11.355 "get_zone_info": false, 00:18:11.355 "zone_management": false, 00:18:11.355 "zone_append": false, 00:18:11.355 "compare": false, 00:18:11.355 "compare_and_write": false, 00:18:11.355 "abort": false, 00:18:11.355 "seek_hole": false, 00:18:11.355 "seek_data": false, 00:18:11.355 "copy": false, 00:18:11.355 "nvme_iov_md": false 00:18:11.355 }, 00:18:11.355 "driver_specific": { 00:18:11.355 "raid": { 00:18:11.355 "uuid": "b6667626-a255-44b8-a7c5-cdaa58d5512f", 00:18:11.355 "strip_size_kb": 64, 00:18:11.355 "state": "online", 00:18:11.355 "raid_level": "raid5f", 00:18:11.355 "superblock": true, 00:18:11.355 "num_base_bdevs": 3, 00:18:11.355 "num_base_bdevs_discovered": 3, 00:18:11.355 "num_base_bdevs_operational": 3, 00:18:11.355 "base_bdevs_list": [ 00:18:11.355 { 00:18:11.355 "name": "pt1", 00:18:11.355 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:11.355 "is_configured": true, 00:18:11.355 "data_offset": 2048, 00:18:11.355 "data_size": 63488 00:18:11.355 }, 00:18:11.355 { 00:18:11.355 "name": "pt2", 00:18:11.355 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:11.355 "is_configured": true, 00:18:11.355 "data_offset": 2048, 00:18:11.355 "data_size": 63488 00:18:11.355 }, 00:18:11.355 { 00:18:11.355 "name": "pt3", 00:18:11.355 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:11.355 "is_configured": true, 00:18:11.355 "data_offset": 2048, 00:18:11.355 "data_size": 63488 00:18:11.355 } 00:18:11.355 ] 00:18:11.355 } 00:18:11.355 } 00:18:11.355 }' 00:18:11.355 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:11.355 14:44:10 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:11.355 pt2 00:18:11.355 pt3' 00:18:11.355 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:11.355 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:11.355 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:11.355 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:11.355 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:11.355 14:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.356 14:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.356 14:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.356 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:11.356 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:11.356 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:11.356 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:11.356 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:11.356 14:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.356 14:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.356 14:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:18:11.356 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:11.356 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:11.356 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:11.356 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:11.356 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:11.356 14:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.356 14:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.614 14:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.614 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:11.614 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:11.614 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:11.614 14:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.614 14:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.614 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:11.614 [2024-11-04 14:44:10.515401] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:11.614 14:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.614 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b6667626-a255-44b8-a7c5-cdaa58d5512f '!=' b6667626-a255-44b8-a7c5-cdaa58d5512f ']' 00:18:11.614 14:44:10 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:18:11.614 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:11.614 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:11.614 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:11.614 14:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.614 14:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.614 [2024-11-04 14:44:10.567227] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:11.614 14:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.614 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:11.614 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.614 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.614 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:11.615 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:11.615 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:11.615 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.615 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.615 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.615 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.615 14:44:10 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.615 14:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.615 14:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.615 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.615 14:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.615 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.615 "name": "raid_bdev1", 00:18:11.615 "uuid": "b6667626-a255-44b8-a7c5-cdaa58d5512f", 00:18:11.615 "strip_size_kb": 64, 00:18:11.615 "state": "online", 00:18:11.615 "raid_level": "raid5f", 00:18:11.615 "superblock": true, 00:18:11.615 "num_base_bdevs": 3, 00:18:11.615 "num_base_bdevs_discovered": 2, 00:18:11.615 "num_base_bdevs_operational": 2, 00:18:11.615 "base_bdevs_list": [ 00:18:11.615 { 00:18:11.615 "name": null, 00:18:11.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.615 "is_configured": false, 00:18:11.615 "data_offset": 0, 00:18:11.615 "data_size": 63488 00:18:11.615 }, 00:18:11.615 { 00:18:11.615 "name": "pt2", 00:18:11.615 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:11.615 "is_configured": true, 00:18:11.615 "data_offset": 2048, 00:18:11.615 "data_size": 63488 00:18:11.615 }, 00:18:11.615 { 00:18:11.615 "name": "pt3", 00:18:11.615 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:11.615 "is_configured": true, 00:18:11.615 "data_offset": 2048, 00:18:11.615 "data_size": 63488 00:18:11.615 } 00:18:11.615 ] 00:18:11.615 }' 00:18:11.615 14:44:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.615 14:44:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.193 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete 
raid_bdev1 00:18:12.193 14:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.193 14:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.193 [2024-11-04 14:44:11.091338] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:12.193 [2024-11-04 14:44:11.091513] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:12.193 [2024-11-04 14:44:11.091626] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:12.193 [2024-11-04 14:44:11.091701] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:12.194 [2024-11-04 14:44:11.091723] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.194 [2024-11-04 14:44:11.175362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:12.194 [2024-11-04 14:44:11.175626] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.194 [2024-11-04 14:44:11.175663] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:18:12.194 [2024-11-04 14:44:11.175680] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.194 [2024-11-04 14:44:11.178592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.194 [2024-11-04 14:44:11.178799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:12.194 [2024-11-04 14:44:11.178915] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:12.194 [2024-11-04 14:44:11.178993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:12.194 pt2 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.194 14:44:11 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.194 "name": "raid_bdev1", 00:18:12.194 "uuid": "b6667626-a255-44b8-a7c5-cdaa58d5512f", 00:18:12.194 "strip_size_kb": 64, 00:18:12.194 "state": "configuring", 00:18:12.194 "raid_level": "raid5f", 00:18:12.194 "superblock": true, 00:18:12.194 "num_base_bdevs": 3, 00:18:12.194 "num_base_bdevs_discovered": 1, 00:18:12.194 "num_base_bdevs_operational": 2, 00:18:12.194 "base_bdevs_list": [ 00:18:12.194 { 00:18:12.194 "name": null, 00:18:12.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.194 "is_configured": false, 00:18:12.194 "data_offset": 2048, 00:18:12.194 "data_size": 63488 00:18:12.194 }, 00:18:12.194 { 00:18:12.194 "name": "pt2", 00:18:12.194 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:12.194 "is_configured": true, 00:18:12.194 "data_offset": 2048, 00:18:12.194 "data_size": 63488 00:18:12.194 }, 00:18:12.194 { 00:18:12.194 "name": null, 00:18:12.194 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:12.194 "is_configured": false, 00:18:12.194 "data_offset": 2048, 00:18:12.194 "data_size": 63488 00:18:12.194 } 00:18:12.194 ] 00:18:12.194 }' 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.194 14:44:11 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:12.761 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:12.761 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:12.761 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:18:12.761 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:12.761 14:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.761 14:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.761 [2024-11-04 14:44:11.687505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:12.761 [2024-11-04 14:44:11.687602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.761 [2024-11-04 14:44:11.687645] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:12.761 [2024-11-04 14:44:11.687668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.761 [2024-11-04 14:44:11.688399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.761 [2024-11-04 14:44:11.688467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:12.761 [2024-11-04 14:44:11.688588] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:12.761 [2024-11-04 14:44:11.688643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:12.761 [2024-11-04 14:44:11.688838] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:12.761 [2024-11-04 14:44:11.688865] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:12.761 [2024-11-04 14:44:11.689271] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:12.761 [2024-11-04 14:44:11.695550] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:12.761 [2024-11-04 14:44:11.695583] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:12.761 pt3 00:18:12.761 [2024-11-04 14:44:11.696071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.761 14:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.761 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:12.761 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.761 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:12.761 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:12.761 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:12.761 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:12.761 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.761 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.761 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.761 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.761 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.761 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.761 14:44:11 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.761 14:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.761 14:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.761 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.761 "name": "raid_bdev1", 00:18:12.761 "uuid": "b6667626-a255-44b8-a7c5-cdaa58d5512f", 00:18:12.761 "strip_size_kb": 64, 00:18:12.761 "state": "online", 00:18:12.761 "raid_level": "raid5f", 00:18:12.761 "superblock": true, 00:18:12.761 "num_base_bdevs": 3, 00:18:12.761 "num_base_bdevs_discovered": 2, 00:18:12.761 "num_base_bdevs_operational": 2, 00:18:12.761 "base_bdevs_list": [ 00:18:12.761 { 00:18:12.761 "name": null, 00:18:12.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.761 "is_configured": false, 00:18:12.761 "data_offset": 2048, 00:18:12.761 "data_size": 63488 00:18:12.761 }, 00:18:12.761 { 00:18:12.761 "name": "pt2", 00:18:12.761 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:12.761 "is_configured": true, 00:18:12.761 "data_offset": 2048, 00:18:12.761 "data_size": 63488 00:18:12.761 }, 00:18:12.761 { 00:18:12.761 "name": "pt3", 00:18:12.761 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:12.761 "is_configured": true, 00:18:12.761 "data_offset": 2048, 00:18:12.762 "data_size": 63488 00:18:12.762 } 00:18:12.762 ] 00:18:12.762 }' 00:18:12.762 14:44:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.762 14:44:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:18:13.329 [2024-11-04 14:44:12.179070] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:13.329 [2024-11-04 14:44:12.179109] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:13.329 [2024-11-04 14:44:12.179201] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:13.329 [2024-11-04 14:44:12.179285] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:13.329 [2024-11-04 14:44:12.179308] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.329 [2024-11-04 14:44:12.251116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:13.329 [2024-11-04 14:44:12.251191] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.329 [2024-11-04 14:44:12.251222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:13.329 [2024-11-04 14:44:12.251237] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.329 [2024-11-04 14:44:12.254181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.329 [2024-11-04 14:44:12.254229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:13.329 [2024-11-04 14:44:12.254361] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:13.329 [2024-11-04 14:44:12.254432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:13.329 [2024-11-04 14:44:12.254605] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:13.329 [2024-11-04 14:44:12.254630] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:13.329 [2024-11-04 14:44:12.254656] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:13.329 [2024-11-04 14:44:12.254728] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:13.329 pt1 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.329 "name": "raid_bdev1", 00:18:13.329 "uuid": "b6667626-a255-44b8-a7c5-cdaa58d5512f", 00:18:13.329 "strip_size_kb": 64, 00:18:13.329 "state": "configuring", 00:18:13.329 "raid_level": "raid5f", 00:18:13.329 "superblock": true, 00:18:13.329 "num_base_bdevs": 3, 00:18:13.329 "num_base_bdevs_discovered": 1, 00:18:13.329 "num_base_bdevs_operational": 2, 00:18:13.329 "base_bdevs_list": [ 00:18:13.329 { 00:18:13.329 "name": null, 00:18:13.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.329 "is_configured": false, 00:18:13.329 "data_offset": 2048, 00:18:13.329 "data_size": 63488 00:18:13.329 }, 00:18:13.329 { 00:18:13.329 "name": "pt2", 00:18:13.329 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:13.329 "is_configured": true, 00:18:13.329 "data_offset": 2048, 00:18:13.329 "data_size": 63488 00:18:13.329 }, 00:18:13.329 { 00:18:13.329 "name": null, 00:18:13.329 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:13.329 "is_configured": false, 00:18:13.329 "data_offset": 2048, 00:18:13.329 "data_size": 63488 00:18:13.329 } 00:18:13.329 ] 00:18:13.329 }' 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.329 14:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.897 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:18:13.897 14:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.897 14:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.897 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:13.897 14:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.897 14:44:12 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:18:13.897 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:13.897 14:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.897 14:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.897 [2024-11-04 14:44:12.799283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:13.897 [2024-11-04 14:44:12.799513] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.897 [2024-11-04 14:44:12.799561] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:13.897 [2024-11-04 14:44:12.799578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.897 [2024-11-04 14:44:12.800183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.897 [2024-11-04 14:44:12.800210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:13.897 [2024-11-04 14:44:12.800328] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:13.897 [2024-11-04 14:44:12.800374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:13.897 [2024-11-04 14:44:12.800529] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:13.897 [2024-11-04 14:44:12.800545] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:13.897 [2024-11-04 14:44:12.800856] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:13.897 [2024-11-04 14:44:12.805787] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:13.897 [2024-11-04 14:44:12.805819] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:13.897 [2024-11-04 14:44:12.806152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.897 pt3 00:18:13.897 14:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.897 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:13.897 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.897 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.897 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:13.897 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:13.897 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:13.897 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.897 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.897 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.897 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.897 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.897 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.897 14:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.897 14:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.897 14:44:12 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.897 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.897 "name": "raid_bdev1", 00:18:13.897 "uuid": "b6667626-a255-44b8-a7c5-cdaa58d5512f", 00:18:13.897 "strip_size_kb": 64, 00:18:13.897 "state": "online", 00:18:13.897 "raid_level": "raid5f", 00:18:13.897 "superblock": true, 00:18:13.897 "num_base_bdevs": 3, 00:18:13.897 "num_base_bdevs_discovered": 2, 00:18:13.897 "num_base_bdevs_operational": 2, 00:18:13.897 "base_bdevs_list": [ 00:18:13.897 { 00:18:13.897 "name": null, 00:18:13.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.897 "is_configured": false, 00:18:13.897 "data_offset": 2048, 00:18:13.897 "data_size": 63488 00:18:13.897 }, 00:18:13.897 { 00:18:13.897 "name": "pt2", 00:18:13.897 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:13.897 "is_configured": true, 00:18:13.897 "data_offset": 2048, 00:18:13.897 "data_size": 63488 00:18:13.897 }, 00:18:13.897 { 00:18:13.897 "name": "pt3", 00:18:13.897 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:13.897 "is_configured": true, 00:18:13.897 "data_offset": 2048, 00:18:13.897 "data_size": 63488 00:18:13.897 } 00:18:13.897 ] 00:18:13.897 }' 00:18:13.897 14:44:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.897 14:44:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.464 14:44:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:14.464 14:44:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.464 14:44:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.464 14:44:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:14.464 14:44:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:14.464 14:44:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:14.464 14:44:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:14.464 14:44:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:14.464 14:44:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.464 14:44:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.464 [2024-11-04 14:44:13.384115] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:14.464 14:44:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.464 14:44:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' b6667626-a255-44b8-a7c5-cdaa58d5512f '!=' b6667626-a255-44b8-a7c5-cdaa58d5512f ']' 00:18:14.464 14:44:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81492 00:18:14.464 14:44:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 81492 ']' 00:18:14.464 14:44:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 81492 00:18:14.464 14:44:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:18:14.464 14:44:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:14.464 14:44:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81492 00:18:14.464 14:44:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:14.464 14:44:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:14.464 killing process with pid 81492 00:18:14.464 14:44:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with 
pid 81492' 00:18:14.464 14:44:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 81492 00:18:14.464 [2024-11-04 14:44:13.474249] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:14.464 [2024-11-04 14:44:13.474363] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:14.465 [2024-11-04 14:44:13.474441] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:14.465 [2024-11-04 14:44:13.474461] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:14.465 14:44:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 81492 00:18:14.723 [2024-11-04 14:44:13.745049] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:15.659 14:44:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:15.659 00:18:15.659 real 0m8.620s 00:18:15.659 user 0m14.107s 00:18:15.659 sys 0m1.230s 00:18:15.659 14:44:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:15.659 14:44:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.659 ************************************ 00:18:15.659 END TEST raid5f_superblock_test 00:18:15.659 ************************************ 00:18:15.918 14:44:14 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:18:15.918 14:44:14 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:18:15.918 14:44:14 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:18:15.918 14:44:14 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:15.918 14:44:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:15.918 ************************************ 00:18:15.918 START TEST raid5f_rebuild_test 00:18:15.918 ************************************ 
00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 false false true 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:15.918 
14:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81945 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81945 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 81945 ']' 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:15.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:15.918 14:44:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.918 [2024-11-04 14:44:14.921175] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:18:15.918 [2024-11-04 14:44:14.921330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81945 ] 00:18:15.918 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:15.918 Zero copy mechanism will not be used. 00:18:16.177 [2024-11-04 14:44:15.097336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.177 [2024-11-04 14:44:15.227832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.435 [2024-11-04 14:44:15.435825] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:16.435 [2024-11-04 14:44:15.435922] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:17.002 14:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:17.002 14:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:18:17.002 14:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:17.002 14:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:17.002 14:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.002 14:44:15 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:17.002 BaseBdev1_malloc 00:18:17.002 14:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.002 14:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:17.002 14:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.002 14:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.002 [2024-11-04 14:44:15.951936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:17.002 [2024-11-04 14:44:15.952031] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.002 [2024-11-04 14:44:15.952066] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:17.002 [2024-11-04 14:44:15.952086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.002 [2024-11-04 14:44:15.954885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.002 [2024-11-04 14:44:15.954982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:17.002 BaseBdev1 00:18:17.002 14:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.002 14:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:17.002 14:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:17.002 14:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.002 14:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.002 BaseBdev2_malloc 00:18:17.002 14:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.002 14:44:15 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:17.002 14:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.002 14:44:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.002 [2024-11-04 14:44:16.004742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:17.002 [2024-11-04 14:44:16.004834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.002 [2024-11-04 14:44:16.004863] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:17.002 [2024-11-04 14:44:16.004885] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.002 [2024-11-04 14:44:16.007648] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.002 [2024-11-04 14:44:16.007716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:17.002 BaseBdev2 00:18:17.002 14:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.002 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:17.002 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:17.002 14:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.002 14:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.002 BaseBdev3_malloc 00:18:17.002 14:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.002 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:17.002 14:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:18:17.002 14:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.002 [2024-11-04 14:44:16.071381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:17.002 [2024-11-04 14:44:16.071490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.002 [2024-11-04 14:44:16.071523] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:17.002 [2024-11-04 14:44:16.071543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.002 [2024-11-04 14:44:16.074353] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.002 [2024-11-04 14:44:16.074409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:17.002 BaseBdev3 00:18:17.002 14:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.002 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:17.002 14:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.002 14:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.002 spare_malloc 00:18:17.002 14:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.002 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:17.002 14:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.002 14:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.261 spare_delay 00:18:17.261 14:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.261 14:44:16 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:17.261 14:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.261 14:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.261 [2024-11-04 14:44:16.132215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:17.261 [2024-11-04 14:44:16.132282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.261 [2024-11-04 14:44:16.132309] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:17.261 [2024-11-04 14:44:16.132328] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.261 [2024-11-04 14:44:16.135170] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.261 [2024-11-04 14:44:16.135230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:17.261 spare 00:18:17.261 14:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.261 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:18:17.261 14:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.261 14:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.261 [2024-11-04 14:44:16.140294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:17.261 [2024-11-04 14:44:16.142718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:17.261 [2024-11-04 14:44:16.142816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:17.261 [2024-11-04 14:44:16.142958] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007780 00:18:17.261 [2024-11-04 14:44:16.142978] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:17.261 [2024-11-04 14:44:16.143315] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:17.261 [2024-11-04 14:44:16.148527] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:17.261 [2024-11-04 14:44:16.148563] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:17.261 [2024-11-04 14:44:16.148809] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.261 14:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.261 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:17.261 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:17.261 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.261 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:17.261 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:17.261 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:17.261 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.261 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.261 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.261 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.261 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.261 
14:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.261 14:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.261 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.261 14:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.261 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.261 "name": "raid_bdev1", 00:18:17.261 "uuid": "17885188-5d80-4e50-ac3d-a97bb21238c6", 00:18:17.261 "strip_size_kb": 64, 00:18:17.261 "state": "online", 00:18:17.261 "raid_level": "raid5f", 00:18:17.261 "superblock": false, 00:18:17.261 "num_base_bdevs": 3, 00:18:17.261 "num_base_bdevs_discovered": 3, 00:18:17.261 "num_base_bdevs_operational": 3, 00:18:17.261 "base_bdevs_list": [ 00:18:17.261 { 00:18:17.261 "name": "BaseBdev1", 00:18:17.261 "uuid": "a8e05335-09c9-582c-b8b0-469696424a1e", 00:18:17.261 "is_configured": true, 00:18:17.261 "data_offset": 0, 00:18:17.261 "data_size": 65536 00:18:17.261 }, 00:18:17.261 { 00:18:17.261 "name": "BaseBdev2", 00:18:17.261 "uuid": "ea1cf2d9-695d-57d4-8504-bcd3d5b28197", 00:18:17.261 "is_configured": true, 00:18:17.261 "data_offset": 0, 00:18:17.261 "data_size": 65536 00:18:17.261 }, 00:18:17.261 { 00:18:17.261 "name": "BaseBdev3", 00:18:17.261 "uuid": "152133f2-46a2-5c44-9ce8-c35036938501", 00:18:17.261 "is_configured": true, 00:18:17.261 "data_offset": 0, 00:18:17.261 "data_size": 65536 00:18:17.261 } 00:18:17.261 ] 00:18:17.261 }' 00:18:17.261 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.261 14:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.828 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:17.828 14:44:16 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.828 14:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.828 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:17.828 [2024-11-04 14:44:16.646891] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:17.828 14:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.828 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:18:17.828 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.828 14:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.828 14:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.828 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:17.828 14:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.828 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:17.828 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:17.828 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:17.828 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:17.828 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:17.828 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:17.828 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:17.828 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:17.828 
14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:17.828 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:17.828 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:17.828 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:17.828 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:17.828 14:44:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:18.086 [2024-11-04 14:44:17.042836] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:18.086 /dev/nbd0 00:18:18.086 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:18.086 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:18.086 14:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:18.086 14:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:18:18.086 14:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:18.086 14:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:18.086 14:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:18.086 14:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:18:18.086 14:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:18.086 14:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:18.086 14:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:18:18.086 1+0 records in 00:18:18.086 1+0 records out 00:18:18.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268007 s, 15.3 MB/s 00:18:18.086 14:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.086 14:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:18:18.086 14:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:18.086 14:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:18.086 14:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:18:18.086 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:18.086 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:18.086 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:18.086 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:18:18.086 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:18:18.086 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:18:18.652 512+0 records in 00:18:18.652 512+0 records out 00:18:18.652 67108864 bytes (67 MB, 64 MiB) copied, 0.513693 s, 131 MB/s 00:18:18.652 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:18.652 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:18.652 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:18.652 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:18.652 14:44:17 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:18.652 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:18.652 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:18.911 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:18.911 [2024-11-04 14:44:17.929665] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.911 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:18.911 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:18.911 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:18.911 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:18.911 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:18.911 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:18.911 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:18.911 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:18.911 14:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.911 14:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.911 [2024-11-04 14:44:17.943626] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:18.911 14:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.912 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:18.912 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:18:18.912 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:18.912 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:18.912 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:18.912 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:18.912 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.912 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.912 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.912 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.912 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.912 14:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.912 14:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.912 14:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.912 14:44:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.912 14:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.912 "name": "raid_bdev1", 00:18:18.912 "uuid": "17885188-5d80-4e50-ac3d-a97bb21238c6", 00:18:18.912 "strip_size_kb": 64, 00:18:18.912 "state": "online", 00:18:18.912 "raid_level": "raid5f", 00:18:18.912 "superblock": false, 00:18:18.912 "num_base_bdevs": 3, 00:18:18.912 "num_base_bdevs_discovered": 2, 00:18:18.912 "num_base_bdevs_operational": 2, 00:18:18.912 "base_bdevs_list": [ 00:18:18.912 { 00:18:18.912 "name": null, 00:18:18.912 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:18.912 "is_configured": false, 00:18:18.912 "data_offset": 0, 00:18:18.912 "data_size": 65536 00:18:18.912 }, 00:18:18.912 { 00:18:18.912 "name": "BaseBdev2", 00:18:18.912 "uuid": "ea1cf2d9-695d-57d4-8504-bcd3d5b28197", 00:18:18.912 "is_configured": true, 00:18:18.912 "data_offset": 0, 00:18:18.912 "data_size": 65536 00:18:18.912 }, 00:18:18.912 { 00:18:18.912 "name": "BaseBdev3", 00:18:18.912 "uuid": "152133f2-46a2-5c44-9ce8-c35036938501", 00:18:18.912 "is_configured": true, 00:18:18.912 "data_offset": 0, 00:18:18.912 "data_size": 65536 00:18:18.912 } 00:18:18.912 ] 00:18:18.912 }' 00:18:18.912 14:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.912 14:44:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.496 14:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:19.496 14:44:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.496 14:44:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.496 [2024-11-04 14:44:18.471773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:19.496 [2024-11-04 14:44:18.487117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:18:19.496 14:44:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.496 14:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:19.496 [2024-11-04 14:44:18.494663] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:20.437 14:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:20.437 14:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.437 
14:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:20.437 14:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:20.437 14:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.437 14:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.437 14:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.437 14:44:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.437 14:44:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.437 14:44:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.437 14:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.437 "name": "raid_bdev1", 00:18:20.437 "uuid": "17885188-5d80-4e50-ac3d-a97bb21238c6", 00:18:20.437 "strip_size_kb": 64, 00:18:20.437 "state": "online", 00:18:20.437 "raid_level": "raid5f", 00:18:20.437 "superblock": false, 00:18:20.437 "num_base_bdevs": 3, 00:18:20.437 "num_base_bdevs_discovered": 3, 00:18:20.437 "num_base_bdevs_operational": 3, 00:18:20.437 "process": { 00:18:20.437 "type": "rebuild", 00:18:20.437 "target": "spare", 00:18:20.437 "progress": { 00:18:20.437 "blocks": 18432, 00:18:20.437 "percent": 14 00:18:20.437 } 00:18:20.437 }, 00:18:20.437 "base_bdevs_list": [ 00:18:20.437 { 00:18:20.437 "name": "spare", 00:18:20.437 "uuid": "f7679c69-bf93-5a6f-b773-b40cdd221a1d", 00:18:20.437 "is_configured": true, 00:18:20.437 "data_offset": 0, 00:18:20.437 "data_size": 65536 00:18:20.437 }, 00:18:20.437 { 00:18:20.437 "name": "BaseBdev2", 00:18:20.437 "uuid": "ea1cf2d9-695d-57d4-8504-bcd3d5b28197", 00:18:20.437 "is_configured": true, 00:18:20.437 "data_offset": 0, 00:18:20.437 "data_size": 65536 00:18:20.437 }, 00:18:20.437 
{ 00:18:20.437 "name": "BaseBdev3", 00:18:20.437 "uuid": "152133f2-46a2-5c44-9ce8-c35036938501", 00:18:20.437 "is_configured": true, 00:18:20.437 "data_offset": 0, 00:18:20.437 "data_size": 65536 00:18:20.437 } 00:18:20.437 ] 00:18:20.437 }' 00:18:20.437 14:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.694 14:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:20.694 14:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.694 14:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:20.694 14:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:20.694 14:44:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.694 14:44:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.694 [2024-11-04 14:44:19.656121] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:20.694 [2024-11-04 14:44:19.708470] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:20.694 [2024-11-04 14:44:19.708728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.694 [2024-11-04 14:44:19.708969] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:20.694 [2024-11-04 14:44:19.709092] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:20.694 14:44:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.694 14:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:20.694 14:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:18:20.694 14:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.694 14:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:20.694 14:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:20.694 14:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:20.694 14:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.694 14:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.694 14:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.694 14:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.694 14:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.694 14:44:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.694 14:44:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.694 14:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.694 14:44:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.694 14:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.694 "name": "raid_bdev1", 00:18:20.694 "uuid": "17885188-5d80-4e50-ac3d-a97bb21238c6", 00:18:20.694 "strip_size_kb": 64, 00:18:20.694 "state": "online", 00:18:20.694 "raid_level": "raid5f", 00:18:20.694 "superblock": false, 00:18:20.694 "num_base_bdevs": 3, 00:18:20.694 "num_base_bdevs_discovered": 2, 00:18:20.694 "num_base_bdevs_operational": 2, 00:18:20.694 "base_bdevs_list": [ 00:18:20.694 { 00:18:20.694 "name": null, 00:18:20.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.694 
"is_configured": false, 00:18:20.694 "data_offset": 0, 00:18:20.694 "data_size": 65536 00:18:20.694 }, 00:18:20.694 { 00:18:20.694 "name": "BaseBdev2", 00:18:20.694 "uuid": "ea1cf2d9-695d-57d4-8504-bcd3d5b28197", 00:18:20.694 "is_configured": true, 00:18:20.694 "data_offset": 0, 00:18:20.694 "data_size": 65536 00:18:20.694 }, 00:18:20.694 { 00:18:20.694 "name": "BaseBdev3", 00:18:20.694 "uuid": "152133f2-46a2-5c44-9ce8-c35036938501", 00:18:20.694 "is_configured": true, 00:18:20.694 "data_offset": 0, 00:18:20.694 "data_size": 65536 00:18:20.694 } 00:18:20.694 ] 00:18:20.694 }' 00:18:20.694 14:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.694 14:44:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.260 14:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:21.260 14:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.260 14:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:21.260 14:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:21.260 14:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.260 14:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.260 14:44:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.260 14:44:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.260 14:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.260 14:44:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.260 14:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.260 "name": 
"raid_bdev1", 00:18:21.260 "uuid": "17885188-5d80-4e50-ac3d-a97bb21238c6", 00:18:21.260 "strip_size_kb": 64, 00:18:21.260 "state": "online", 00:18:21.260 "raid_level": "raid5f", 00:18:21.260 "superblock": false, 00:18:21.260 "num_base_bdevs": 3, 00:18:21.260 "num_base_bdevs_discovered": 2, 00:18:21.260 "num_base_bdevs_operational": 2, 00:18:21.260 "base_bdevs_list": [ 00:18:21.260 { 00:18:21.260 "name": null, 00:18:21.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.260 "is_configured": false, 00:18:21.260 "data_offset": 0, 00:18:21.260 "data_size": 65536 00:18:21.260 }, 00:18:21.260 { 00:18:21.260 "name": "BaseBdev2", 00:18:21.260 "uuid": "ea1cf2d9-695d-57d4-8504-bcd3d5b28197", 00:18:21.260 "is_configured": true, 00:18:21.260 "data_offset": 0, 00:18:21.260 "data_size": 65536 00:18:21.260 }, 00:18:21.260 { 00:18:21.260 "name": "BaseBdev3", 00:18:21.260 "uuid": "152133f2-46a2-5c44-9ce8-c35036938501", 00:18:21.260 "is_configured": true, 00:18:21.260 "data_offset": 0, 00:18:21.260 "data_size": 65536 00:18:21.260 } 00:18:21.260 ] 00:18:21.260 }' 00:18:21.260 14:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.518 14:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:21.518 14:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.518 14:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:21.518 14:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:21.518 14:44:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.518 14:44:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.518 [2024-11-04 14:44:20.436445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:21.518 [2024-11-04 
14:44:20.451182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:18:21.518 14:44:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.518 14:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:21.518 [2024-11-04 14:44:20.458534] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:22.453 14:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:22.453 14:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.453 14:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:22.453 14:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:22.453 14:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.453 14:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.453 14:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.453 14:44:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.453 14:44:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.453 14:44:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.453 14:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:22.453 "name": "raid_bdev1", 00:18:22.453 "uuid": "17885188-5d80-4e50-ac3d-a97bb21238c6", 00:18:22.453 "strip_size_kb": 64, 00:18:22.453 "state": "online", 00:18:22.453 "raid_level": "raid5f", 00:18:22.453 "superblock": false, 00:18:22.453 "num_base_bdevs": 3, 00:18:22.453 "num_base_bdevs_discovered": 3, 00:18:22.453 "num_base_bdevs_operational": 3, 
00:18:22.453 "process": { 00:18:22.453 "type": "rebuild", 00:18:22.453 "target": "spare", 00:18:22.453 "progress": { 00:18:22.453 "blocks": 18432, 00:18:22.453 "percent": 14 00:18:22.453 } 00:18:22.453 }, 00:18:22.453 "base_bdevs_list": [ 00:18:22.453 { 00:18:22.453 "name": "spare", 00:18:22.453 "uuid": "f7679c69-bf93-5a6f-b773-b40cdd221a1d", 00:18:22.453 "is_configured": true, 00:18:22.453 "data_offset": 0, 00:18:22.453 "data_size": 65536 00:18:22.453 }, 00:18:22.453 { 00:18:22.453 "name": "BaseBdev2", 00:18:22.453 "uuid": "ea1cf2d9-695d-57d4-8504-bcd3d5b28197", 00:18:22.453 "is_configured": true, 00:18:22.453 "data_offset": 0, 00:18:22.453 "data_size": 65536 00:18:22.453 }, 00:18:22.453 { 00:18:22.453 "name": "BaseBdev3", 00:18:22.453 "uuid": "152133f2-46a2-5c44-9ce8-c35036938501", 00:18:22.453 "is_configured": true, 00:18:22.453 "data_offset": 0, 00:18:22.453 "data_size": 65536 00:18:22.453 } 00:18:22.453 ] 00:18:22.453 }' 00:18:22.453 14:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.453 14:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:22.453 14:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:22.712 14:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:22.712 14:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:22.712 14:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:18:22.712 14:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:22.712 14:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=594 00:18:22.712 14:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:22.712 14:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 
-- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:22.712 14:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.712 14:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:22.712 14:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:22.712 14:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.712 14:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.712 14:44:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.712 14:44:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.712 14:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.712 14:44:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.712 14:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:22.712 "name": "raid_bdev1", 00:18:22.712 "uuid": "17885188-5d80-4e50-ac3d-a97bb21238c6", 00:18:22.712 "strip_size_kb": 64, 00:18:22.712 "state": "online", 00:18:22.712 "raid_level": "raid5f", 00:18:22.712 "superblock": false, 00:18:22.712 "num_base_bdevs": 3, 00:18:22.712 "num_base_bdevs_discovered": 3, 00:18:22.712 "num_base_bdevs_operational": 3, 00:18:22.712 "process": { 00:18:22.712 "type": "rebuild", 00:18:22.712 "target": "spare", 00:18:22.712 "progress": { 00:18:22.712 "blocks": 22528, 00:18:22.712 "percent": 17 00:18:22.712 } 00:18:22.712 }, 00:18:22.712 "base_bdevs_list": [ 00:18:22.712 { 00:18:22.712 "name": "spare", 00:18:22.712 "uuid": "f7679c69-bf93-5a6f-b773-b40cdd221a1d", 00:18:22.712 "is_configured": true, 00:18:22.712 "data_offset": 0, 00:18:22.712 "data_size": 65536 00:18:22.712 }, 00:18:22.712 { 00:18:22.712 "name": "BaseBdev2", 
00:18:22.712 "uuid": "ea1cf2d9-695d-57d4-8504-bcd3d5b28197", 00:18:22.712 "is_configured": true, 00:18:22.712 "data_offset": 0, 00:18:22.712 "data_size": 65536 00:18:22.712 }, 00:18:22.712 { 00:18:22.712 "name": "BaseBdev3", 00:18:22.712 "uuid": "152133f2-46a2-5c44-9ce8-c35036938501", 00:18:22.712 "is_configured": true, 00:18:22.712 "data_offset": 0, 00:18:22.712 "data_size": 65536 00:18:22.712 } 00:18:22.712 ] 00:18:22.712 }' 00:18:22.712 14:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.712 14:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:22.712 14:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:22.712 14:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:22.712 14:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:24.086 14:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:24.086 14:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:24.086 14:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.086 14:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:24.086 14:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:24.086 14:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.086 14:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.086 14:44:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.086 14:44:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.086 14:44:22 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.086 14:44:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.086 14:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:24.086 "name": "raid_bdev1", 00:18:24.086 "uuid": "17885188-5d80-4e50-ac3d-a97bb21238c6", 00:18:24.086 "strip_size_kb": 64, 00:18:24.086 "state": "online", 00:18:24.086 "raid_level": "raid5f", 00:18:24.086 "superblock": false, 00:18:24.086 "num_base_bdevs": 3, 00:18:24.086 "num_base_bdevs_discovered": 3, 00:18:24.086 "num_base_bdevs_operational": 3, 00:18:24.086 "process": { 00:18:24.086 "type": "rebuild", 00:18:24.086 "target": "spare", 00:18:24.086 "progress": { 00:18:24.086 "blocks": 47104, 00:18:24.086 "percent": 35 00:18:24.086 } 00:18:24.086 }, 00:18:24.086 "base_bdevs_list": [ 00:18:24.086 { 00:18:24.086 "name": "spare", 00:18:24.086 "uuid": "f7679c69-bf93-5a6f-b773-b40cdd221a1d", 00:18:24.086 "is_configured": true, 00:18:24.086 "data_offset": 0, 00:18:24.086 "data_size": 65536 00:18:24.086 }, 00:18:24.086 { 00:18:24.086 "name": "BaseBdev2", 00:18:24.086 "uuid": "ea1cf2d9-695d-57d4-8504-bcd3d5b28197", 00:18:24.086 "is_configured": true, 00:18:24.086 "data_offset": 0, 00:18:24.086 "data_size": 65536 00:18:24.086 }, 00:18:24.086 { 00:18:24.086 "name": "BaseBdev3", 00:18:24.086 "uuid": "152133f2-46a2-5c44-9ce8-c35036938501", 00:18:24.086 "is_configured": true, 00:18:24.086 "data_offset": 0, 00:18:24.086 "data_size": 65536 00:18:24.086 } 00:18:24.086 ] 00:18:24.086 }' 00:18:24.086 14:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.086 14:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:24.086 14:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.086 14:44:22 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:24.086 14:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:25.022 14:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:25.022 14:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:25.022 14:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:25.022 14:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:25.022 14:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:25.022 14:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:25.022 14:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.022 14:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.022 14:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.022 14:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:25.022 14:44:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.022 14:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.022 "name": "raid_bdev1", 00:18:25.022 "uuid": "17885188-5d80-4e50-ac3d-a97bb21238c6", 00:18:25.022 "strip_size_kb": 64, 00:18:25.022 "state": "online", 00:18:25.022 "raid_level": "raid5f", 00:18:25.022 "superblock": false, 00:18:25.022 "num_base_bdevs": 3, 00:18:25.022 "num_base_bdevs_discovered": 3, 00:18:25.022 "num_base_bdevs_operational": 3, 00:18:25.022 "process": { 00:18:25.022 "type": "rebuild", 00:18:25.022 "target": "spare", 00:18:25.022 "progress": { 00:18:25.022 "blocks": 69632, 
00:18:25.022 "percent": 53 00:18:25.022 } 00:18:25.022 }, 00:18:25.022 "base_bdevs_list": [ 00:18:25.022 { 00:18:25.022 "name": "spare", 00:18:25.022 "uuid": "f7679c69-bf93-5a6f-b773-b40cdd221a1d", 00:18:25.022 "is_configured": true, 00:18:25.022 "data_offset": 0, 00:18:25.022 "data_size": 65536 00:18:25.022 }, 00:18:25.022 { 00:18:25.022 "name": "BaseBdev2", 00:18:25.022 "uuid": "ea1cf2d9-695d-57d4-8504-bcd3d5b28197", 00:18:25.022 "is_configured": true, 00:18:25.022 "data_offset": 0, 00:18:25.022 "data_size": 65536 00:18:25.022 }, 00:18:25.022 { 00:18:25.022 "name": "BaseBdev3", 00:18:25.022 "uuid": "152133f2-46a2-5c44-9ce8-c35036938501", 00:18:25.022 "is_configured": true, 00:18:25.022 "data_offset": 0, 00:18:25.022 "data_size": 65536 00:18:25.022 } 00:18:25.022 ] 00:18:25.022 }' 00:18:25.022 14:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:25.022 14:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:25.022 14:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:25.022 14:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:25.022 14:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:26.017 14:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:26.017 14:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:26.017 14:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:26.017 14:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:26.017 14:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:26.017 14:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:18:26.017 14:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.017 14:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.017 14:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.017 14:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.275 14:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.275 14:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:26.275 "name": "raid_bdev1", 00:18:26.275 "uuid": "17885188-5d80-4e50-ac3d-a97bb21238c6", 00:18:26.275 "strip_size_kb": 64, 00:18:26.275 "state": "online", 00:18:26.275 "raid_level": "raid5f", 00:18:26.275 "superblock": false, 00:18:26.275 "num_base_bdevs": 3, 00:18:26.275 "num_base_bdevs_discovered": 3, 00:18:26.275 "num_base_bdevs_operational": 3, 00:18:26.275 "process": { 00:18:26.275 "type": "rebuild", 00:18:26.275 "target": "spare", 00:18:26.275 "progress": { 00:18:26.275 "blocks": 94208, 00:18:26.275 "percent": 71 00:18:26.275 } 00:18:26.275 }, 00:18:26.275 "base_bdevs_list": [ 00:18:26.275 { 00:18:26.275 "name": "spare", 00:18:26.275 "uuid": "f7679c69-bf93-5a6f-b773-b40cdd221a1d", 00:18:26.275 "is_configured": true, 00:18:26.275 "data_offset": 0, 00:18:26.275 "data_size": 65536 00:18:26.275 }, 00:18:26.275 { 00:18:26.275 "name": "BaseBdev2", 00:18:26.275 "uuid": "ea1cf2d9-695d-57d4-8504-bcd3d5b28197", 00:18:26.275 "is_configured": true, 00:18:26.275 "data_offset": 0, 00:18:26.275 "data_size": 65536 00:18:26.275 }, 00:18:26.275 { 00:18:26.275 "name": "BaseBdev3", 00:18:26.275 "uuid": "152133f2-46a2-5c44-9ce8-c35036938501", 00:18:26.275 "is_configured": true, 00:18:26.275 "data_offset": 0, 00:18:26.275 "data_size": 65536 00:18:26.275 } 00:18:26.275 ] 00:18:26.275 }' 00:18:26.275 14:44:25 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:26.275 14:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:26.275 14:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:26.275 14:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:26.275 14:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:27.210 14:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:27.210 14:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:27.210 14:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:27.210 14:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:27.210 14:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:27.210 14:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:27.210 14:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.210 14:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.210 14:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.210 14:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.210 14:44:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.468 14:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:27.468 "name": "raid_bdev1", 00:18:27.468 "uuid": "17885188-5d80-4e50-ac3d-a97bb21238c6", 00:18:27.468 "strip_size_kb": 64, 00:18:27.468 "state": "online", 00:18:27.468 "raid_level": "raid5f", 
00:18:27.468 "superblock": false, 00:18:27.468 "num_base_bdevs": 3, 00:18:27.468 "num_base_bdevs_discovered": 3, 00:18:27.468 "num_base_bdevs_operational": 3, 00:18:27.468 "process": { 00:18:27.468 "type": "rebuild", 00:18:27.468 "target": "spare", 00:18:27.468 "progress": { 00:18:27.468 "blocks": 116736, 00:18:27.468 "percent": 89 00:18:27.468 } 00:18:27.468 }, 00:18:27.468 "base_bdevs_list": [ 00:18:27.468 { 00:18:27.468 "name": "spare", 00:18:27.468 "uuid": "f7679c69-bf93-5a6f-b773-b40cdd221a1d", 00:18:27.468 "is_configured": true, 00:18:27.468 "data_offset": 0, 00:18:27.468 "data_size": 65536 00:18:27.468 }, 00:18:27.468 { 00:18:27.468 "name": "BaseBdev2", 00:18:27.468 "uuid": "ea1cf2d9-695d-57d4-8504-bcd3d5b28197", 00:18:27.468 "is_configured": true, 00:18:27.468 "data_offset": 0, 00:18:27.468 "data_size": 65536 00:18:27.468 }, 00:18:27.468 { 00:18:27.468 "name": "BaseBdev3", 00:18:27.468 "uuid": "152133f2-46a2-5c44-9ce8-c35036938501", 00:18:27.468 "is_configured": true, 00:18:27.468 "data_offset": 0, 00:18:27.468 "data_size": 65536 00:18:27.468 } 00:18:27.468 ] 00:18:27.468 }' 00:18:27.469 14:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:27.469 14:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:27.469 14:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:27.469 14:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:27.469 14:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:28.033 [2024-11-04 14:44:26.934180] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:28.033 [2024-11-04 14:44:26.934303] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:28.033 [2024-11-04 14:44:26.934370] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:18:28.599 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:28.599 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:28.599 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.599 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:28.599 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:28.599 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.599 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.599 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.599 14:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.599 14:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.599 14:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.599 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.599 "name": "raid_bdev1", 00:18:28.599 "uuid": "17885188-5d80-4e50-ac3d-a97bb21238c6", 00:18:28.599 "strip_size_kb": 64, 00:18:28.599 "state": "online", 00:18:28.599 "raid_level": "raid5f", 00:18:28.599 "superblock": false, 00:18:28.599 "num_base_bdevs": 3, 00:18:28.599 "num_base_bdevs_discovered": 3, 00:18:28.599 "num_base_bdevs_operational": 3, 00:18:28.599 "base_bdevs_list": [ 00:18:28.599 { 00:18:28.599 "name": "spare", 00:18:28.599 "uuid": "f7679c69-bf93-5a6f-b773-b40cdd221a1d", 00:18:28.599 "is_configured": true, 00:18:28.599 "data_offset": 0, 00:18:28.599 "data_size": 65536 00:18:28.599 }, 00:18:28.599 { 00:18:28.599 "name": 
"BaseBdev2", 00:18:28.599 "uuid": "ea1cf2d9-695d-57d4-8504-bcd3d5b28197", 00:18:28.599 "is_configured": true, 00:18:28.599 "data_offset": 0, 00:18:28.599 "data_size": 65536 00:18:28.599 }, 00:18:28.599 { 00:18:28.599 "name": "BaseBdev3", 00:18:28.599 "uuid": "152133f2-46a2-5c44-9ce8-c35036938501", 00:18:28.599 "is_configured": true, 00:18:28.599 "data_offset": 0, 00:18:28.599 "data_size": 65536 00:18:28.599 } 00:18:28.599 ] 00:18:28.599 }' 00:18:28.599 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.599 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:28.599 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.599 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:28.599 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:18:28.599 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:28.599 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.599 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:28.600 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:28.600 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.600 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.600 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.600 14:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.600 14:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.600 14:44:27 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.600 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.600 "name": "raid_bdev1", 00:18:28.600 "uuid": "17885188-5d80-4e50-ac3d-a97bb21238c6", 00:18:28.600 "strip_size_kb": 64, 00:18:28.600 "state": "online", 00:18:28.600 "raid_level": "raid5f", 00:18:28.600 "superblock": false, 00:18:28.600 "num_base_bdevs": 3, 00:18:28.600 "num_base_bdevs_discovered": 3, 00:18:28.600 "num_base_bdevs_operational": 3, 00:18:28.600 "base_bdevs_list": [ 00:18:28.600 { 00:18:28.600 "name": "spare", 00:18:28.600 "uuid": "f7679c69-bf93-5a6f-b773-b40cdd221a1d", 00:18:28.600 "is_configured": true, 00:18:28.600 "data_offset": 0, 00:18:28.600 "data_size": 65536 00:18:28.600 }, 00:18:28.600 { 00:18:28.600 "name": "BaseBdev2", 00:18:28.600 "uuid": "ea1cf2d9-695d-57d4-8504-bcd3d5b28197", 00:18:28.600 "is_configured": true, 00:18:28.600 "data_offset": 0, 00:18:28.600 "data_size": 65536 00:18:28.600 }, 00:18:28.600 { 00:18:28.600 "name": "BaseBdev3", 00:18:28.600 "uuid": "152133f2-46a2-5c44-9ce8-c35036938501", 00:18:28.600 "is_configured": true, 00:18:28.600 "data_offset": 0, 00:18:28.600 "data_size": 65536 00:18:28.600 } 00:18:28.600 ] 00:18:28.600 }' 00:18:28.600 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.859 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:28.859 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.859 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:28.859 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:28.859 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.859 14:44:27 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.859 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:28.859 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:28.859 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:28.859 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.859 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.859 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.859 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.859 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.859 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.859 14:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.859 14:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:28.859 14:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.859 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.859 "name": "raid_bdev1", 00:18:28.859 "uuid": "17885188-5d80-4e50-ac3d-a97bb21238c6", 00:18:28.859 "strip_size_kb": 64, 00:18:28.859 "state": "online", 00:18:28.859 "raid_level": "raid5f", 00:18:28.859 "superblock": false, 00:18:28.859 "num_base_bdevs": 3, 00:18:28.859 "num_base_bdevs_discovered": 3, 00:18:28.859 "num_base_bdevs_operational": 3, 00:18:28.859 "base_bdevs_list": [ 00:18:28.859 { 00:18:28.859 "name": "spare", 00:18:28.859 "uuid": "f7679c69-bf93-5a6f-b773-b40cdd221a1d", 00:18:28.859 "is_configured": 
true, 00:18:28.859 "data_offset": 0, 00:18:28.859 "data_size": 65536 00:18:28.859 }, 00:18:28.859 { 00:18:28.859 "name": "BaseBdev2", 00:18:28.859 "uuid": "ea1cf2d9-695d-57d4-8504-bcd3d5b28197", 00:18:28.859 "is_configured": true, 00:18:28.859 "data_offset": 0, 00:18:28.859 "data_size": 65536 00:18:28.859 }, 00:18:28.859 { 00:18:28.859 "name": "BaseBdev3", 00:18:28.859 "uuid": "152133f2-46a2-5c44-9ce8-c35036938501", 00:18:28.859 "is_configured": true, 00:18:28.859 "data_offset": 0, 00:18:28.859 "data_size": 65536 00:18:28.859 } 00:18:28.859 ] 00:18:28.859 }' 00:18:28.859 14:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.859 14:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.426 14:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:29.426 14:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.426 14:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.427 [2024-11-04 14:44:28.325853] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:29.427 [2024-11-04 14:44:28.325889] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:29.427 [2024-11-04 14:44:28.326024] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:29.427 [2024-11-04 14:44:28.326131] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:29.427 [2024-11-04 14:44:28.326158] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:29.427 14:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.427 14:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.427 14:44:28 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:18:29.427 14:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.427 14:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.427 14:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.427 14:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:29.427 14:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:29.427 14:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:29.427 14:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:29.427 14:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:29.427 14:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:29.427 14:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:29.427 14:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:29.427 14:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:29.427 14:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:29.427 14:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:29.427 14:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:29.427 14:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:29.685 /dev/nbd0 00:18:29.685 14:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:29.685 14:44:28 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:29.685 14:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:29.685 14:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:18:29.685 14:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:29.685 14:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:29.685 14:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:29.685 14:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:18:29.685 14:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:29.685 14:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:29.685 14:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:29.685 1+0 records in 00:18:29.685 1+0 records out 00:18:29.685 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294314 s, 13.9 MB/s 00:18:29.685 14:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:29.685 14:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:18:29.685 14:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:29.685 14:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:29.685 14:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:18:29.685 14:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:29.685 14:44:28 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:29.685 14:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:29.944 /dev/nbd1 00:18:29.944 14:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:29.944 14:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:29.944 14:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:18:29.944 14:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:18:29.944 14:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:29.945 14:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:29.945 14:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:18:29.945 14:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:18:29.945 14:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:29.945 14:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:29.945 14:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:29.945 1+0 records in 00:18:29.945 1+0 records out 00:18:29.945 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461067 s, 8.9 MB/s 00:18:29.945 14:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:29.945 14:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:18:29.945 14:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:29.945 
14:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:29.945 14:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:18:29.945 14:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:29.945 14:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:29.945 14:44:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:30.205 14:44:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:30.205 14:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:30.205 14:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:30.205 14:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:30.205 14:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:30.205 14:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:30.205 14:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:30.465 14:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:30.465 14:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:30.465 14:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:30.465 14:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:30.465 14:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:30.465 14:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:30.724 14:44:29 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:18:30.724 14:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:30.724 14:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:30.724 14:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:30.982 14:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:30.982 14:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:30.982 14:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:30.982 14:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:30.982 14:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:30.982 14:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:30.982 14:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:30.982 14:44:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:30.982 14:44:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:30.982 14:44:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81945 00:18:30.982 14:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 81945 ']' 00:18:30.982 14:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 81945 00:18:30.982 14:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:18:30.982 14:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:30.982 14:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81945 00:18:30.982 killing process with pid 81945 00:18:30.982 
Received shutdown signal, test time was about 60.000000 seconds 00:18:30.982 00:18:30.982 Latency(us) 00:18:30.982 [2024-11-04T14:44:30.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.982 [2024-11-04T14:44:30.105Z] =================================================================================================================== 00:18:30.982 [2024-11-04T14:44:30.105Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:30.982 14:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:30.982 14:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:30.982 14:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81945' 00:18:30.982 14:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 81945 00:18:30.982 [2024-11-04 14:44:29.946883] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:30.982 14:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 81945 00:18:31.241 [2024-11-04 14:44:30.302911] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:32.620 14:44:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:18:32.620 00:18:32.620 real 0m16.506s 00:18:32.620 user 0m21.169s 00:18:32.620 sys 0m2.046s 00:18:32.620 ************************************ 00:18:32.620 END TEST raid5f_rebuild_test 00:18:32.620 ************************************ 00:18:32.620 14:44:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:32.620 14:44:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.620 14:44:31 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:18:32.620 14:44:31 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 
00:18:32.620 14:44:31 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:32.620 14:44:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:32.620 ************************************ 00:18:32.620 START TEST raid5f_rebuild_test_sb 00:18:32.621 ************************************ 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 true false true 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82396 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82396 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@833 -- # '[' -z 82396 ']' 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:32.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:32.621 14:44:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.621 [2024-11-04 14:44:31.509618] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:18:32.621 [2024-11-04 14:44:31.509789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82396 ] 00:18:32.621 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:32.621 Zero copy mechanism will not be used.
00:18:32.621 [2024-11-04 14:44:31.691371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.880 [2024-11-04 14:44:31.827509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.141 [2024-11-04 14:44:32.033206] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:33.141 [2024-11-04 14:44:32.033557] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:33.709 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:33.709 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:18:33.709 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:33.709 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:33.709 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.709 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.709 BaseBdev1_malloc 00:18:33.709 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.709 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:33.709 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.709 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.709 [2024-11-04 14:44:32.634748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:33.709 [2024-11-04 14:44:32.634891] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:18:33.709 [2024-11-04 14:44:32.634958] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:33.709 [2024-11-04 14:44:32.635009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.709 [2024-11-04 14:44:32.638156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.709 [2024-11-04 14:44:32.638213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:33.709 BaseBdev1 00:18:33.709 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.709 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:33.709 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:33.709 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.709 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.709 BaseBdev2_malloc 00:18:33.709 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.709 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:33.709 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.709 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.710 [2024-11-04 14:44:32.695410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:33.710 [2024-11-04 14:44:32.695533] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.710 [2024-11-04 14:44:32.695561] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:33.710 
[2024-11-04 14:44:32.695581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.710 [2024-11-04 14:44:32.698393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.710 [2024-11-04 14:44:32.698479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:33.710 BaseBdev2 00:18:33.710 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.710 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:33.710 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:33.710 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.710 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.710 BaseBdev3_malloc 00:18:33.710 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.710 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:33.710 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.710 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.710 [2024-11-04 14:44:32.762250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:33.710 [2024-11-04 14:44:32.762325] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.710 [2024-11-04 14:44:32.762358] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:33.710 [2024-11-04 14:44:32.762377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.710 [2024-11-04 14:44:32.765226] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.710 [2024-11-04 14:44:32.765280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:33.710 BaseBdev3 00:18:33.710 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.710 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:33.710 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.710 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.710 spare_malloc 00:18:33.710 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.710 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:33.710 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.710 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.710 spare_delay 00:18:33.710 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.710 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:33.710 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.710 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.710 [2024-11-04 14:44:32.826621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:33.710 [2024-11-04 14:44:32.826871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.710 [2024-11-04 14:44:32.826977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:18:33.710 [2024-11-04 14:44:32.827021] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.969 [2024-11-04 14:44:32.830268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.969 [2024-11-04 14:44:32.830328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:33.969 spare 00:18:33.969 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.969 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:18:33.969 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.969 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.969 [2024-11-04 14:44:32.838734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:33.969 [2024-11-04 14:44:32.841255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:33.969 [2024-11-04 14:44:32.841479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:33.969 [2024-11-04 14:44:32.841733] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:33.969 [2024-11-04 14:44:32.841755] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:33.969 [2024-11-04 14:44:32.842099] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:33.969 [2024-11-04 14:44:32.847330] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:33.969 [2024-11-04 14:44:32.847362] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:33.969 [2024-11-04 14:44:32.847634] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:33.969 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.969 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:33.969 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.969 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.969 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:33.969 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:33.969 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:33.969 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.969 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.969 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.969 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.969 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.969 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.969 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.969 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.969 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.969 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.969 "name": "raid_bdev1", 00:18:33.969 
"uuid": "17c54954-d22b-4465-ad75-d51d6589d513", 00:18:33.969 "strip_size_kb": 64, 00:18:33.969 "state": "online", 00:18:33.969 "raid_level": "raid5f", 00:18:33.969 "superblock": true, 00:18:33.969 "num_base_bdevs": 3, 00:18:33.969 "num_base_bdevs_discovered": 3, 00:18:33.969 "num_base_bdevs_operational": 3, 00:18:33.969 "base_bdevs_list": [ 00:18:33.969 { 00:18:33.969 "name": "BaseBdev1", 00:18:33.969 "uuid": "d4dd0331-921a-534b-aa19-28326549a47f", 00:18:33.969 "is_configured": true, 00:18:33.969 "data_offset": 2048, 00:18:33.969 "data_size": 63488 00:18:33.969 }, 00:18:33.969 { 00:18:33.969 "name": "BaseBdev2", 00:18:33.969 "uuid": "eee2ced9-4baf-5d93-bbaf-a4b0655db1dd", 00:18:33.969 "is_configured": true, 00:18:33.969 "data_offset": 2048, 00:18:33.969 "data_size": 63488 00:18:33.969 }, 00:18:33.969 { 00:18:33.969 "name": "BaseBdev3", 00:18:33.969 "uuid": "974e9e31-c817-5eed-9389-7010a0ac18b0", 00:18:33.969 "is_configured": true, 00:18:33.969 "data_offset": 2048, 00:18:33.969 "data_size": 63488 00:18:33.969 } 00:18:33.969 ] 00:18:33.969 }' 00:18:33.969 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.969 14:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.536 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:34.536 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.536 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.536 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:34.537 [2024-11-04 14:44:33.369683] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:34.537 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.537 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 
-- # raid_bdev_size=126976 00:18:34.537 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.537 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.537 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.537 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:34.537 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.537 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:34.537 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:34.537 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:34.537 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:34.537 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:34.537 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:34.537 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:34.537 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:34.537 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:34.537 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:34.537 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:34.537 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:34.537 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:34.537 14:44:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:34.795 [2024-11-04 14:44:33.761624] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:34.795 /dev/nbd0 00:18:34.795 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:34.795 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:34.795 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:34.795 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:18:34.795 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:34.795 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:34.795 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:34.795 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:18:34.795 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:34.795 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:34.795 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:34.795 1+0 records in 00:18:34.795 1+0 records out 00:18:34.795 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355961 s, 11.5 MB/s 00:18:34.795 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:34.795 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:18:34.795 14:44:33 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:34.795 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:34.795 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:18:34.795 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:34.795 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:34.795 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:34.795 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:18:34.795 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:18:34.795 14:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:18:35.214 496+0 records in 00:18:35.214 496+0 records out 00:18:35.214 65011712 bytes (65 MB, 62 MiB) copied, 0.465475 s, 140 MB/s 00:18:35.214 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:35.214 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:35.214 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:35.214 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:35.214 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:35.214 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:35.214 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:35.474 14:44:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:35.474 [2024-11-04 14:44:34.588589] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:35.474 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:35.475 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:35.475 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:35.475 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:35.475 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:35.734 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:35.734 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:35.734 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:35.734 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.734 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.734 [2024-11-04 14:44:34.598473] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:35.734 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.734 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:35.734 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.734 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.734 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:35.734 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:35.734 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:35.734 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.734 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.734 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.734 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.734 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.734 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.734 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.734 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.734 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.734 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.734 "name": "raid_bdev1", 00:18:35.734 "uuid": "17c54954-d22b-4465-ad75-d51d6589d513", 00:18:35.734 "strip_size_kb": 64, 00:18:35.734 "state": "online", 00:18:35.734 "raid_level": "raid5f", 00:18:35.734 "superblock": true, 00:18:35.734 "num_base_bdevs": 3, 00:18:35.734 "num_base_bdevs_discovered": 2, 00:18:35.734 "num_base_bdevs_operational": 2, 00:18:35.734 "base_bdevs_list": [ 00:18:35.734 { 00:18:35.734 "name": null, 00:18:35.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.734 "is_configured": false, 00:18:35.734 "data_offset": 0, 00:18:35.734 "data_size": 63488 00:18:35.734 }, 00:18:35.734 { 00:18:35.734 "name": "BaseBdev2", 00:18:35.734 "uuid": "eee2ced9-4baf-5d93-bbaf-a4b0655db1dd", 00:18:35.734 
"is_configured": true, 00:18:35.734 "data_offset": 2048, 00:18:35.734 "data_size": 63488 00:18:35.734 }, 00:18:35.734 { 00:18:35.734 "name": "BaseBdev3", 00:18:35.734 "uuid": "974e9e31-c817-5eed-9389-7010a0ac18b0", 00:18:35.734 "is_configured": true, 00:18:35.735 "data_offset": 2048, 00:18:35.735 "data_size": 63488 00:18:35.735 } 00:18:35.735 ] 00:18:35.735 }' 00:18:35.735 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.735 14:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.994 14:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:35.994 14:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.994 14:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.994 [2024-11-04 14:44:35.094624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:35.994 [2024-11-04 14:44:35.110579] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:18:35.994 14:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.994 14:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:36.253 [2024-11-04 14:44:35.118296] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:37.189 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:37.189 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.189 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:37.189 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:37.189 14:44:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.189 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.189 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.190 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.190 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.190 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.190 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.190 "name": "raid_bdev1", 00:18:37.190 "uuid": "17c54954-d22b-4465-ad75-d51d6589d513", 00:18:37.190 "strip_size_kb": 64, 00:18:37.190 "state": "online", 00:18:37.190 "raid_level": "raid5f", 00:18:37.190 "superblock": true, 00:18:37.190 "num_base_bdevs": 3, 00:18:37.190 "num_base_bdevs_discovered": 3, 00:18:37.190 "num_base_bdevs_operational": 3, 00:18:37.190 "process": { 00:18:37.190 "type": "rebuild", 00:18:37.190 "target": "spare", 00:18:37.190 "progress": { 00:18:37.190 "blocks": 18432, 00:18:37.190 "percent": 14 00:18:37.190 } 00:18:37.190 }, 00:18:37.190 "base_bdevs_list": [ 00:18:37.190 { 00:18:37.190 "name": "spare", 00:18:37.190 "uuid": "18873d71-3586-5f40-a3a3-49658327de8e", 00:18:37.190 "is_configured": true, 00:18:37.190 "data_offset": 2048, 00:18:37.190 "data_size": 63488 00:18:37.190 }, 00:18:37.190 { 00:18:37.190 "name": "BaseBdev2", 00:18:37.190 "uuid": "eee2ced9-4baf-5d93-bbaf-a4b0655db1dd", 00:18:37.190 "is_configured": true, 00:18:37.190 "data_offset": 2048, 00:18:37.190 "data_size": 63488 00:18:37.190 }, 00:18:37.190 { 00:18:37.190 "name": "BaseBdev3", 00:18:37.190 "uuid": "974e9e31-c817-5eed-9389-7010a0ac18b0", 00:18:37.190 "is_configured": true, 00:18:37.190 "data_offset": 2048, 00:18:37.190 "data_size": 
63488 00:18:37.190 } 00:18:37.190 ] 00:18:37.190 }' 00:18:37.190 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.190 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:37.190 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.190 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:37.190 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:37.190 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.190 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.190 [2024-11-04 14:44:36.280729] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:37.448 [2024-11-04 14:44:36.333106] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:37.448 [2024-11-04 14:44:36.333204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.448 [2024-11-04 14:44:36.333234] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:37.448 [2024-11-04 14:44:36.333247] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:37.448 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.448 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:37.448 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:37.448 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.448 14:44:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:37.448 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:37.448 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:37.448 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.448 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.448 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.448 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.448 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.448 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.448 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.448 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.448 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.448 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.448 "name": "raid_bdev1", 00:18:37.448 "uuid": "17c54954-d22b-4465-ad75-d51d6589d513", 00:18:37.448 "strip_size_kb": 64, 00:18:37.449 "state": "online", 00:18:37.449 "raid_level": "raid5f", 00:18:37.449 "superblock": true, 00:18:37.449 "num_base_bdevs": 3, 00:18:37.449 "num_base_bdevs_discovered": 2, 00:18:37.449 "num_base_bdevs_operational": 2, 00:18:37.449 "base_bdevs_list": [ 00:18:37.449 { 00:18:37.449 "name": null, 00:18:37.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.449 "is_configured": false, 00:18:37.449 "data_offset": 0, 00:18:37.449 "data_size": 63488 
00:18:37.449 }, 00:18:37.449 { 00:18:37.449 "name": "BaseBdev2", 00:18:37.449 "uuid": "eee2ced9-4baf-5d93-bbaf-a4b0655db1dd", 00:18:37.449 "is_configured": true, 00:18:37.449 "data_offset": 2048, 00:18:37.449 "data_size": 63488 00:18:37.449 }, 00:18:37.449 { 00:18:37.449 "name": "BaseBdev3", 00:18:37.449 "uuid": "974e9e31-c817-5eed-9389-7010a0ac18b0", 00:18:37.449 "is_configured": true, 00:18:37.449 "data_offset": 2048, 00:18:37.449 "data_size": 63488 00:18:37.449 } 00:18:37.449 ] 00:18:37.449 }' 00:18:37.449 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.449 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.042 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:38.042 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.042 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:38.042 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:38.042 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.042 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.043 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.043 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.043 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.043 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.043 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:38.043 "name": "raid_bdev1", 00:18:38.043 "uuid": 
"17c54954-d22b-4465-ad75-d51d6589d513", 00:18:38.043 "strip_size_kb": 64, 00:18:38.043 "state": "online", 00:18:38.043 "raid_level": "raid5f", 00:18:38.043 "superblock": true, 00:18:38.043 "num_base_bdevs": 3, 00:18:38.043 "num_base_bdevs_discovered": 2, 00:18:38.043 "num_base_bdevs_operational": 2, 00:18:38.043 "base_bdevs_list": [ 00:18:38.043 { 00:18:38.043 "name": null, 00:18:38.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.043 "is_configured": false, 00:18:38.043 "data_offset": 0, 00:18:38.043 "data_size": 63488 00:18:38.043 }, 00:18:38.043 { 00:18:38.043 "name": "BaseBdev2", 00:18:38.043 "uuid": "eee2ced9-4baf-5d93-bbaf-a4b0655db1dd", 00:18:38.043 "is_configured": true, 00:18:38.043 "data_offset": 2048, 00:18:38.043 "data_size": 63488 00:18:38.043 }, 00:18:38.043 { 00:18:38.043 "name": "BaseBdev3", 00:18:38.043 "uuid": "974e9e31-c817-5eed-9389-7010a0ac18b0", 00:18:38.043 "is_configured": true, 00:18:38.043 "data_offset": 2048, 00:18:38.043 "data_size": 63488 00:18:38.043 } 00:18:38.043 ] 00:18:38.043 }' 00:18:38.043 14:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.043 14:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:38.043 14:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:38.043 14:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:38.043 14:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:38.043 14:44:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.043 14:44:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.043 [2024-11-04 14:44:37.069062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:38.043 [2024-11-04 14:44:37.084023] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:18:38.043 14:44:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.043 14:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:38.043 [2024-11-04 14:44:37.091329] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:38.979 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:38.979 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.979 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:38.979 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:38.979 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.979 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.979 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.979 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.979 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.238 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.238 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.238 "name": "raid_bdev1", 00:18:39.238 "uuid": "17c54954-d22b-4465-ad75-d51d6589d513", 00:18:39.238 "strip_size_kb": 64, 00:18:39.238 "state": "online", 00:18:39.238 "raid_level": "raid5f", 00:18:39.238 "superblock": true, 00:18:39.238 "num_base_bdevs": 3, 00:18:39.238 "num_base_bdevs_discovered": 3, 00:18:39.238 
"num_base_bdevs_operational": 3, 00:18:39.238 "process": { 00:18:39.238 "type": "rebuild", 00:18:39.238 "target": "spare", 00:18:39.238 "progress": { 00:18:39.238 "blocks": 18432, 00:18:39.238 "percent": 14 00:18:39.238 } 00:18:39.238 }, 00:18:39.238 "base_bdevs_list": [ 00:18:39.238 { 00:18:39.238 "name": "spare", 00:18:39.238 "uuid": "18873d71-3586-5f40-a3a3-49658327de8e", 00:18:39.238 "is_configured": true, 00:18:39.238 "data_offset": 2048, 00:18:39.238 "data_size": 63488 00:18:39.238 }, 00:18:39.238 { 00:18:39.238 "name": "BaseBdev2", 00:18:39.238 "uuid": "eee2ced9-4baf-5d93-bbaf-a4b0655db1dd", 00:18:39.238 "is_configured": true, 00:18:39.238 "data_offset": 2048, 00:18:39.238 "data_size": 63488 00:18:39.238 }, 00:18:39.238 { 00:18:39.238 "name": "BaseBdev3", 00:18:39.238 "uuid": "974e9e31-c817-5eed-9389-7010a0ac18b0", 00:18:39.238 "is_configured": true, 00:18:39.238 "data_offset": 2048, 00:18:39.238 "data_size": 63488 00:18:39.238 } 00:18:39.238 ] 00:18:39.238 }' 00:18:39.238 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.238 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:39.238 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.238 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:39.238 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:39.238 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:39.238 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:39.238 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:18:39.238 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:39.238 
14:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=611 00:18:39.238 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:39.238 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:39.238 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.238 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:39.238 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:39.238 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.238 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.238 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.238 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.238 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.238 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.238 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.238 "name": "raid_bdev1", 00:18:39.239 "uuid": "17c54954-d22b-4465-ad75-d51d6589d513", 00:18:39.239 "strip_size_kb": 64, 00:18:39.239 "state": "online", 00:18:39.239 "raid_level": "raid5f", 00:18:39.239 "superblock": true, 00:18:39.239 "num_base_bdevs": 3, 00:18:39.239 "num_base_bdevs_discovered": 3, 00:18:39.239 "num_base_bdevs_operational": 3, 00:18:39.239 "process": { 00:18:39.239 "type": "rebuild", 00:18:39.239 "target": "spare", 00:18:39.239 "progress": { 00:18:39.239 "blocks": 22528, 00:18:39.239 "percent": 17 00:18:39.239 } 00:18:39.239 }, 
00:18:39.239 "base_bdevs_list": [ 00:18:39.239 { 00:18:39.239 "name": "spare", 00:18:39.239 "uuid": "18873d71-3586-5f40-a3a3-49658327de8e", 00:18:39.239 "is_configured": true, 00:18:39.239 "data_offset": 2048, 00:18:39.239 "data_size": 63488 00:18:39.239 }, 00:18:39.239 { 00:18:39.239 "name": "BaseBdev2", 00:18:39.239 "uuid": "eee2ced9-4baf-5d93-bbaf-a4b0655db1dd", 00:18:39.239 "is_configured": true, 00:18:39.239 "data_offset": 2048, 00:18:39.239 "data_size": 63488 00:18:39.239 }, 00:18:39.239 { 00:18:39.239 "name": "BaseBdev3", 00:18:39.239 "uuid": "974e9e31-c817-5eed-9389-7010a0ac18b0", 00:18:39.239 "is_configured": true, 00:18:39.239 "data_offset": 2048, 00:18:39.239 "data_size": 63488 00:18:39.239 } 00:18:39.239 ] 00:18:39.239 }' 00:18:39.239 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.239 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:39.239 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.498 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:39.498 14:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:40.434 14:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:40.434 14:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:40.434 14:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:40.434 14:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:40.434 14:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:40.434 14:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:40.434 
14:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.434 14:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.434 14:44:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.434 14:44:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.434 14:44:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.434 14:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:40.434 "name": "raid_bdev1", 00:18:40.434 "uuid": "17c54954-d22b-4465-ad75-d51d6589d513", 00:18:40.434 "strip_size_kb": 64, 00:18:40.434 "state": "online", 00:18:40.434 "raid_level": "raid5f", 00:18:40.434 "superblock": true, 00:18:40.434 "num_base_bdevs": 3, 00:18:40.434 "num_base_bdevs_discovered": 3, 00:18:40.434 "num_base_bdevs_operational": 3, 00:18:40.434 "process": { 00:18:40.434 "type": "rebuild", 00:18:40.434 "target": "spare", 00:18:40.434 "progress": { 00:18:40.434 "blocks": 47104, 00:18:40.434 "percent": 37 00:18:40.434 } 00:18:40.434 }, 00:18:40.434 "base_bdevs_list": [ 00:18:40.434 { 00:18:40.434 "name": "spare", 00:18:40.434 "uuid": "18873d71-3586-5f40-a3a3-49658327de8e", 00:18:40.434 "is_configured": true, 00:18:40.434 "data_offset": 2048, 00:18:40.434 "data_size": 63488 00:18:40.434 }, 00:18:40.434 { 00:18:40.434 "name": "BaseBdev2", 00:18:40.434 "uuid": "eee2ced9-4baf-5d93-bbaf-a4b0655db1dd", 00:18:40.434 "is_configured": true, 00:18:40.434 "data_offset": 2048, 00:18:40.434 "data_size": 63488 00:18:40.434 }, 00:18:40.434 { 00:18:40.434 "name": "BaseBdev3", 00:18:40.434 "uuid": "974e9e31-c817-5eed-9389-7010a0ac18b0", 00:18:40.434 "is_configured": true, 00:18:40.434 "data_offset": 2048, 00:18:40.434 "data_size": 63488 00:18:40.434 } 00:18:40.434 ] 00:18:40.434 }' 00:18:40.434 14:44:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:40.434 14:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:40.434 14:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:40.693 14:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:40.693 14:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:41.630 14:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:41.630 14:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:41.630 14:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.630 14:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:41.630 14:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:41.630 14:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.630 14:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.630 14:44:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.630 14:44:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.630 14:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.630 14:44:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.630 14:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.630 "name": "raid_bdev1", 00:18:41.630 "uuid": "17c54954-d22b-4465-ad75-d51d6589d513", 00:18:41.630 
"strip_size_kb": 64, 00:18:41.630 "state": "online", 00:18:41.631 "raid_level": "raid5f", 00:18:41.631 "superblock": true, 00:18:41.631 "num_base_bdevs": 3, 00:18:41.631 "num_base_bdevs_discovered": 3, 00:18:41.631 "num_base_bdevs_operational": 3, 00:18:41.631 "process": { 00:18:41.631 "type": "rebuild", 00:18:41.631 "target": "spare", 00:18:41.631 "progress": { 00:18:41.631 "blocks": 69632, 00:18:41.631 "percent": 54 00:18:41.631 } 00:18:41.631 }, 00:18:41.631 "base_bdevs_list": [ 00:18:41.631 { 00:18:41.631 "name": "spare", 00:18:41.631 "uuid": "18873d71-3586-5f40-a3a3-49658327de8e", 00:18:41.631 "is_configured": true, 00:18:41.631 "data_offset": 2048, 00:18:41.631 "data_size": 63488 00:18:41.631 }, 00:18:41.631 { 00:18:41.631 "name": "BaseBdev2", 00:18:41.631 "uuid": "eee2ced9-4baf-5d93-bbaf-a4b0655db1dd", 00:18:41.631 "is_configured": true, 00:18:41.631 "data_offset": 2048, 00:18:41.631 "data_size": 63488 00:18:41.631 }, 00:18:41.631 { 00:18:41.631 "name": "BaseBdev3", 00:18:41.631 "uuid": "974e9e31-c817-5eed-9389-7010a0ac18b0", 00:18:41.631 "is_configured": true, 00:18:41.631 "data_offset": 2048, 00:18:41.631 "data_size": 63488 00:18:41.631 } 00:18:41.631 ] 00:18:41.631 }' 00:18:41.631 14:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.631 14:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:41.631 14:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.631 14:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:41.631 14:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:43.007 14:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:43.007 14:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:18:43.007 14:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.007 14:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:43.007 14:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:43.007 14:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.007 14:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.007 14:44:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.007 14:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.007 14:44:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.007 14:44:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.007 14:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.007 "name": "raid_bdev1", 00:18:43.007 "uuid": "17c54954-d22b-4465-ad75-d51d6589d513", 00:18:43.007 "strip_size_kb": 64, 00:18:43.007 "state": "online", 00:18:43.007 "raid_level": "raid5f", 00:18:43.007 "superblock": true, 00:18:43.007 "num_base_bdevs": 3, 00:18:43.007 "num_base_bdevs_discovered": 3, 00:18:43.007 "num_base_bdevs_operational": 3, 00:18:43.007 "process": { 00:18:43.007 "type": "rebuild", 00:18:43.007 "target": "spare", 00:18:43.007 "progress": { 00:18:43.007 "blocks": 92160, 00:18:43.007 "percent": 72 00:18:43.007 } 00:18:43.007 }, 00:18:43.007 "base_bdevs_list": [ 00:18:43.007 { 00:18:43.007 "name": "spare", 00:18:43.007 "uuid": "18873d71-3586-5f40-a3a3-49658327de8e", 00:18:43.007 "is_configured": true, 00:18:43.007 "data_offset": 2048, 00:18:43.007 "data_size": 63488 00:18:43.007 }, 00:18:43.007 { 00:18:43.007 "name": "BaseBdev2", 00:18:43.007 "uuid": 
"eee2ced9-4baf-5d93-bbaf-a4b0655db1dd", 00:18:43.007 "is_configured": true, 00:18:43.007 "data_offset": 2048, 00:18:43.007 "data_size": 63488 00:18:43.007 }, 00:18:43.007 { 00:18:43.007 "name": "BaseBdev3", 00:18:43.007 "uuid": "974e9e31-c817-5eed-9389-7010a0ac18b0", 00:18:43.007 "is_configured": true, 00:18:43.007 "data_offset": 2048, 00:18:43.007 "data_size": 63488 00:18:43.007 } 00:18:43.007 ] 00:18:43.007 }' 00:18:43.007 14:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.007 14:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:43.007 14:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.007 14:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:43.007 14:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:43.943 14:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:43.943 14:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:43.943 14:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.943 14:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:43.943 14:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:43.943 14:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.943 14:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.943 14:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.943 14:44:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:43.943 14:44:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.943 14:44:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.943 14:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.943 "name": "raid_bdev1", 00:18:43.943 "uuid": "17c54954-d22b-4465-ad75-d51d6589d513", 00:18:43.943 "strip_size_kb": 64, 00:18:43.943 "state": "online", 00:18:43.943 "raid_level": "raid5f", 00:18:43.943 "superblock": true, 00:18:43.943 "num_base_bdevs": 3, 00:18:43.943 "num_base_bdevs_discovered": 3, 00:18:43.943 "num_base_bdevs_operational": 3, 00:18:43.943 "process": { 00:18:43.943 "type": "rebuild", 00:18:43.943 "target": "spare", 00:18:43.943 "progress": { 00:18:43.943 "blocks": 116736, 00:18:43.943 "percent": 91 00:18:43.943 } 00:18:43.943 }, 00:18:43.943 "base_bdevs_list": [ 00:18:43.943 { 00:18:43.943 "name": "spare", 00:18:43.943 "uuid": "18873d71-3586-5f40-a3a3-49658327de8e", 00:18:43.943 "is_configured": true, 00:18:43.943 "data_offset": 2048, 00:18:43.943 "data_size": 63488 00:18:43.943 }, 00:18:43.943 { 00:18:43.943 "name": "BaseBdev2", 00:18:43.943 "uuid": "eee2ced9-4baf-5d93-bbaf-a4b0655db1dd", 00:18:43.943 "is_configured": true, 00:18:43.943 "data_offset": 2048, 00:18:43.943 "data_size": 63488 00:18:43.943 }, 00:18:43.943 { 00:18:43.943 "name": "BaseBdev3", 00:18:43.943 "uuid": "974e9e31-c817-5eed-9389-7010a0ac18b0", 00:18:43.943 "is_configured": true, 00:18:43.943 "data_offset": 2048, 00:18:43.943 "data_size": 63488 00:18:43.943 } 00:18:43.943 ] 00:18:43.943 }' 00:18:43.943 14:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.943 14:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:43.943 14:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.943 
14:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:43.943 14:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:44.510 [2024-11-04 14:44:43.365277] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:44.510 [2024-11-04 14:44:43.365415] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:44.510 [2024-11-04 14:44:43.365577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:45.078 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:45.078 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:45.078 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:45.078 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:45.078 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:45.078 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:45.078 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.078 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.078 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.078 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.078 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.078 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:45.078 "name": "raid_bdev1", 00:18:45.078 "uuid": 
"17c54954-d22b-4465-ad75-d51d6589d513", 00:18:45.078 "strip_size_kb": 64, 00:18:45.078 "state": "online", 00:18:45.078 "raid_level": "raid5f", 00:18:45.078 "superblock": true, 00:18:45.078 "num_base_bdevs": 3, 00:18:45.078 "num_base_bdevs_discovered": 3, 00:18:45.078 "num_base_bdevs_operational": 3, 00:18:45.078 "base_bdevs_list": [ 00:18:45.078 { 00:18:45.078 "name": "spare", 00:18:45.078 "uuid": "18873d71-3586-5f40-a3a3-49658327de8e", 00:18:45.078 "is_configured": true, 00:18:45.078 "data_offset": 2048, 00:18:45.078 "data_size": 63488 00:18:45.078 }, 00:18:45.078 { 00:18:45.078 "name": "BaseBdev2", 00:18:45.078 "uuid": "eee2ced9-4baf-5d93-bbaf-a4b0655db1dd", 00:18:45.078 "is_configured": true, 00:18:45.078 "data_offset": 2048, 00:18:45.078 "data_size": 63488 00:18:45.078 }, 00:18:45.078 { 00:18:45.078 "name": "BaseBdev3", 00:18:45.078 "uuid": "974e9e31-c817-5eed-9389-7010a0ac18b0", 00:18:45.078 "is_configured": true, 00:18:45.078 "data_offset": 2048, 00:18:45.078 "data_size": 63488 00:18:45.078 } 00:18:45.078 ] 00:18:45.078 }' 00:18:45.078 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:45.078 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:45.079 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:45.338 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:45.338 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:18:45.338 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:45.338 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:45.338 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:45.338 14:44:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:45.338 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:45.338 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.338 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.338 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.338 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.338 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.338 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:45.338 "name": "raid_bdev1", 00:18:45.338 "uuid": "17c54954-d22b-4465-ad75-d51d6589d513", 00:18:45.338 "strip_size_kb": 64, 00:18:45.338 "state": "online", 00:18:45.338 "raid_level": "raid5f", 00:18:45.338 "superblock": true, 00:18:45.338 "num_base_bdevs": 3, 00:18:45.338 "num_base_bdevs_discovered": 3, 00:18:45.338 "num_base_bdevs_operational": 3, 00:18:45.338 "base_bdevs_list": [ 00:18:45.338 { 00:18:45.338 "name": "spare", 00:18:45.338 "uuid": "18873d71-3586-5f40-a3a3-49658327de8e", 00:18:45.338 "is_configured": true, 00:18:45.338 "data_offset": 2048, 00:18:45.339 "data_size": 63488 00:18:45.339 }, 00:18:45.339 { 00:18:45.339 "name": "BaseBdev2", 00:18:45.339 "uuid": "eee2ced9-4baf-5d93-bbaf-a4b0655db1dd", 00:18:45.339 "is_configured": true, 00:18:45.339 "data_offset": 2048, 00:18:45.339 "data_size": 63488 00:18:45.339 }, 00:18:45.339 { 00:18:45.339 "name": "BaseBdev3", 00:18:45.339 "uuid": "974e9e31-c817-5eed-9389-7010a0ac18b0", 00:18:45.339 "is_configured": true, 00:18:45.339 "data_offset": 2048, 00:18:45.339 "data_size": 63488 00:18:45.339 } 00:18:45.339 ] 00:18:45.339 }' 00:18:45.339 14:44:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:45.339 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:45.339 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:45.339 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:45.339 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:45.339 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:45.339 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:45.339 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:45.339 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:45.339 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:45.339 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.339 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.339 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.339 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.339 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.339 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.339 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.339 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:18:45.339 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.339 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.339 "name": "raid_bdev1", 00:18:45.339 "uuid": "17c54954-d22b-4465-ad75-d51d6589d513", 00:18:45.339 "strip_size_kb": 64, 00:18:45.339 "state": "online", 00:18:45.339 "raid_level": "raid5f", 00:18:45.339 "superblock": true, 00:18:45.339 "num_base_bdevs": 3, 00:18:45.339 "num_base_bdevs_discovered": 3, 00:18:45.339 "num_base_bdevs_operational": 3, 00:18:45.339 "base_bdevs_list": [ 00:18:45.339 { 00:18:45.339 "name": "spare", 00:18:45.339 "uuid": "18873d71-3586-5f40-a3a3-49658327de8e", 00:18:45.339 "is_configured": true, 00:18:45.339 "data_offset": 2048, 00:18:45.339 "data_size": 63488 00:18:45.339 }, 00:18:45.339 { 00:18:45.339 "name": "BaseBdev2", 00:18:45.339 "uuid": "eee2ced9-4baf-5d93-bbaf-a4b0655db1dd", 00:18:45.339 "is_configured": true, 00:18:45.339 "data_offset": 2048, 00:18:45.339 "data_size": 63488 00:18:45.339 }, 00:18:45.339 { 00:18:45.339 "name": "BaseBdev3", 00:18:45.339 "uuid": "974e9e31-c817-5eed-9389-7010a0ac18b0", 00:18:45.339 "is_configured": true, 00:18:45.339 "data_offset": 2048, 00:18:45.339 "data_size": 63488 00:18:45.339 } 00:18:45.339 ] 00:18:45.339 }' 00:18:45.339 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.339 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.918 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:45.918 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.918 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.918 [2024-11-04 14:44:44.921285] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:45.918 [2024-11-04 
14:44:44.921339] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:45.918 [2024-11-04 14:44:44.921459] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:45.918 [2024-11-04 14:44:44.921588] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:45.918 [2024-11-04 14:44:44.921653] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:45.918 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.918 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.918 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:18:45.918 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.918 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.918 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.918 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:45.918 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:45.918 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:45.918 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:45.918 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:45.918 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:45.918 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:45.918 14:44:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:45.918 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:45.918 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:45.918 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:45.918 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:45.919 14:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:46.486 /dev/nbd0 00:18:46.486 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:46.486 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:46.486 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:46.486 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:18:46.486 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:46.486 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:46.486 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:46.486 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:18:46.486 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:46.486 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:46.486 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:46.486 1+0 records in 00:18:46.486 1+0 
records out 00:18:46.486 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311808 s, 13.1 MB/s 00:18:46.486 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:46.487 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:18:46.487 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:46.487 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:46.487 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:18:46.487 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:46.487 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:46.487 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:46.745 /dev/nbd1 00:18:46.745 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:46.745 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:46.745 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:18:46.745 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:18:46.746 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:46.746 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:46.746 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:18:46.746 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:18:46.746 14:44:45 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:46.746 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:46.746 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:46.746 1+0 records in 00:18:46.746 1+0 records out 00:18:46.746 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386695 s, 10.6 MB/s 00:18:46.746 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:46.746 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:18:46.746 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:46.746 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:46.746 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:18:46.746 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:46.746 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:46.746 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:47.005 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:47.005 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:47.005 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:47.005 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:47.005 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 
-- # local i 00:18:47.005 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:47.005 14:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:47.264 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:47.264 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:47.264 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:47.264 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:47.264 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:47.264 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:47.264 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:47.264 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:47.264 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:47.264 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:47.523 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:47.523 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:47.523 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:47.523 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:47.523 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:47.523 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd1 /proc/partitions 00:18:47.523 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:47.523 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:47.523 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:47.523 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:47.523 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.523 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.523 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.523 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:47.523 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.523 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.523 [2024-11-04 14:44:46.496233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:47.523 [2024-11-04 14:44:46.496318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.523 [2024-11-04 14:44:46.496353] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:47.523 [2024-11-04 14:44:46.496371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.523 [2024-11-04 14:44:46.499542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.523 [2024-11-04 14:44:46.499607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:47.523 [2024-11-04 14:44:46.499718] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:47.523 [2024-11-04 14:44:46.499792] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:47.523 [2024-11-04 14:44:46.500151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:47.523 [2024-11-04 14:44:46.500356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:47.524 spare 00:18:47.524 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.524 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:47.524 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.524 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.524 [2024-11-04 14:44:46.600570] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:47.524 [2024-11-04 14:44:46.600624] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:18:47.524 [2024-11-04 14:44:46.601059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:18:47.524 [2024-11-04 14:44:46.606131] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:47.524 [2024-11-04 14:44:46.606158] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:47.524 [2024-11-04 14:44:46.606441] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:47.524 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.524 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:47.524 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:47.524 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:18:47.524 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:47.524 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:47.524 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:47.524 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.524 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.524 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.524 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.524 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.524 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.524 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.524 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.524 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.783 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.783 "name": "raid_bdev1", 00:18:47.783 "uuid": "17c54954-d22b-4465-ad75-d51d6589d513", 00:18:47.783 "strip_size_kb": 64, 00:18:47.783 "state": "online", 00:18:47.783 "raid_level": "raid5f", 00:18:47.783 "superblock": true, 00:18:47.783 "num_base_bdevs": 3, 00:18:47.783 "num_base_bdevs_discovered": 3, 00:18:47.783 "num_base_bdevs_operational": 3, 00:18:47.783 "base_bdevs_list": [ 00:18:47.783 { 00:18:47.783 "name": "spare", 00:18:47.783 "uuid": "18873d71-3586-5f40-a3a3-49658327de8e", 00:18:47.783 "is_configured": true, 00:18:47.783 
"data_offset": 2048, 00:18:47.783 "data_size": 63488 00:18:47.783 }, 00:18:47.783 { 00:18:47.783 "name": "BaseBdev2", 00:18:47.783 "uuid": "eee2ced9-4baf-5d93-bbaf-a4b0655db1dd", 00:18:47.783 "is_configured": true, 00:18:47.783 "data_offset": 2048, 00:18:47.783 "data_size": 63488 00:18:47.783 }, 00:18:47.783 { 00:18:47.783 "name": "BaseBdev3", 00:18:47.783 "uuid": "974e9e31-c817-5eed-9389-7010a0ac18b0", 00:18:47.783 "is_configured": true, 00:18:47.783 "data_offset": 2048, 00:18:47.783 "data_size": 63488 00:18:47.783 } 00:18:47.783 ] 00:18:47.783 }' 00:18:47.783 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.783 14:44:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.042 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:48.042 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:48.042 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:48.042 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:48.042 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:48.042 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.042 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.042 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.042 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.042 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.301 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:48.301 
"name": "raid_bdev1", 00:18:48.301 "uuid": "17c54954-d22b-4465-ad75-d51d6589d513", 00:18:48.301 "strip_size_kb": 64, 00:18:48.301 "state": "online", 00:18:48.301 "raid_level": "raid5f", 00:18:48.301 "superblock": true, 00:18:48.301 "num_base_bdevs": 3, 00:18:48.301 "num_base_bdevs_discovered": 3, 00:18:48.301 "num_base_bdevs_operational": 3, 00:18:48.301 "base_bdevs_list": [ 00:18:48.301 { 00:18:48.301 "name": "spare", 00:18:48.301 "uuid": "18873d71-3586-5f40-a3a3-49658327de8e", 00:18:48.301 "is_configured": true, 00:18:48.301 "data_offset": 2048, 00:18:48.301 "data_size": 63488 00:18:48.301 }, 00:18:48.301 { 00:18:48.301 "name": "BaseBdev2", 00:18:48.301 "uuid": "eee2ced9-4baf-5d93-bbaf-a4b0655db1dd", 00:18:48.301 "is_configured": true, 00:18:48.301 "data_offset": 2048, 00:18:48.301 "data_size": 63488 00:18:48.301 }, 00:18:48.301 { 00:18:48.301 "name": "BaseBdev3", 00:18:48.301 "uuid": "974e9e31-c817-5eed-9389-7010a0ac18b0", 00:18:48.301 "is_configured": true, 00:18:48.301 "data_offset": 2048, 00:18:48.301 "data_size": 63488 00:18:48.301 } 00:18:48.301 ] 00:18:48.301 }' 00:18:48.301 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:48.301 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:48.301 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:48.301 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:48.301 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.301 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:48.301 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.301 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.301 
14:44:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.301 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:48.301 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:48.301 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.301 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.301 [2024-11-04 14:44:47.356485] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:48.301 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.301 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:48.301 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:48.301 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:48.301 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:48.301 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:48.301 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:48.301 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.301 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.301 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.301 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.301 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:18:48.301 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.301 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.301 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.301 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.301 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.301 "name": "raid_bdev1", 00:18:48.301 "uuid": "17c54954-d22b-4465-ad75-d51d6589d513", 00:18:48.301 "strip_size_kb": 64, 00:18:48.301 "state": "online", 00:18:48.301 "raid_level": "raid5f", 00:18:48.301 "superblock": true, 00:18:48.301 "num_base_bdevs": 3, 00:18:48.301 "num_base_bdevs_discovered": 2, 00:18:48.301 "num_base_bdevs_operational": 2, 00:18:48.301 "base_bdevs_list": [ 00:18:48.301 { 00:18:48.301 "name": null, 00:18:48.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.301 "is_configured": false, 00:18:48.301 "data_offset": 0, 00:18:48.301 "data_size": 63488 00:18:48.301 }, 00:18:48.301 { 00:18:48.301 "name": "BaseBdev2", 00:18:48.301 "uuid": "eee2ced9-4baf-5d93-bbaf-a4b0655db1dd", 00:18:48.301 "is_configured": true, 00:18:48.301 "data_offset": 2048, 00:18:48.301 "data_size": 63488 00:18:48.301 }, 00:18:48.301 { 00:18:48.301 "name": "BaseBdev3", 00:18:48.301 "uuid": "974e9e31-c817-5eed-9389-7010a0ac18b0", 00:18:48.301 "is_configured": true, 00:18:48.301 "data_offset": 2048, 00:18:48.301 "data_size": 63488 00:18:48.301 } 00:18:48.301 ] 00:18:48.301 }' 00:18:48.301 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.301 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.868 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:48.868 14:44:47 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.868 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.868 [2024-11-04 14:44:47.908765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:48.868 [2024-11-04 14:44:47.909152] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:48.868 [2024-11-04 14:44:47.909294] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:48.868 [2024-11-04 14:44:47.909354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:48.868 [2024-11-04 14:44:47.924228] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:18:48.868 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.868 14:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:48.868 [2024-11-04 14:44:47.931601] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:50.244 14:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:50.244 14:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:50.244 14:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:50.244 14:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:50.245 14:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:50.245 14:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.245 14:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:50.245 14:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.245 14:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.245 14:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.245 14:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:50.245 "name": "raid_bdev1", 00:18:50.245 "uuid": "17c54954-d22b-4465-ad75-d51d6589d513", 00:18:50.245 "strip_size_kb": 64, 00:18:50.245 "state": "online", 00:18:50.245 "raid_level": "raid5f", 00:18:50.245 "superblock": true, 00:18:50.245 "num_base_bdevs": 3, 00:18:50.245 "num_base_bdevs_discovered": 3, 00:18:50.245 "num_base_bdevs_operational": 3, 00:18:50.245 "process": { 00:18:50.245 "type": "rebuild", 00:18:50.245 "target": "spare", 00:18:50.245 "progress": { 00:18:50.245 "blocks": 18432, 00:18:50.245 "percent": 14 00:18:50.245 } 00:18:50.245 }, 00:18:50.245 "base_bdevs_list": [ 00:18:50.245 { 00:18:50.245 "name": "spare", 00:18:50.245 "uuid": "18873d71-3586-5f40-a3a3-49658327de8e", 00:18:50.245 "is_configured": true, 00:18:50.245 "data_offset": 2048, 00:18:50.245 "data_size": 63488 00:18:50.245 }, 00:18:50.245 { 00:18:50.245 "name": "BaseBdev2", 00:18:50.245 "uuid": "eee2ced9-4baf-5d93-bbaf-a4b0655db1dd", 00:18:50.245 "is_configured": true, 00:18:50.245 "data_offset": 2048, 00:18:50.245 "data_size": 63488 00:18:50.245 }, 00:18:50.245 { 00:18:50.245 "name": "BaseBdev3", 00:18:50.245 "uuid": "974e9e31-c817-5eed-9389-7010a0ac18b0", 00:18:50.245 "is_configured": true, 00:18:50.245 "data_offset": 2048, 00:18:50.245 "data_size": 63488 00:18:50.245 } 00:18:50.245 ] 00:18:50.245 }' 00:18:50.245 14:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:50.245 14:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:18:50.245 14:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:50.245 14:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:50.245 14:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:50.245 14:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.245 14:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.245 [2024-11-04 14:44:49.098003] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:50.245 [2024-11-04 14:44:49.146250] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:50.245 [2024-11-04 14:44:49.146354] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:50.245 [2024-11-04 14:44:49.146381] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:50.245 [2024-11-04 14:44:49.146395] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:50.245 14:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.245 14:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:50.245 14:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:50.245 14:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:50.245 14:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:50.245 14:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:50.245 14:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:18:50.245 14:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.245 14:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.245 14:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.245 14:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.245 14:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.245 14:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.245 14:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.245 14:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.245 14:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.245 14:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.245 "name": "raid_bdev1", 00:18:50.245 "uuid": "17c54954-d22b-4465-ad75-d51d6589d513", 00:18:50.245 "strip_size_kb": 64, 00:18:50.245 "state": "online", 00:18:50.245 "raid_level": "raid5f", 00:18:50.245 "superblock": true, 00:18:50.245 "num_base_bdevs": 3, 00:18:50.245 "num_base_bdevs_discovered": 2, 00:18:50.245 "num_base_bdevs_operational": 2, 00:18:50.245 "base_bdevs_list": [ 00:18:50.245 { 00:18:50.245 "name": null, 00:18:50.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.245 "is_configured": false, 00:18:50.245 "data_offset": 0, 00:18:50.245 "data_size": 63488 00:18:50.245 }, 00:18:50.245 { 00:18:50.245 "name": "BaseBdev2", 00:18:50.245 "uuid": "eee2ced9-4baf-5d93-bbaf-a4b0655db1dd", 00:18:50.245 "is_configured": true, 00:18:50.245 "data_offset": 2048, 00:18:50.245 "data_size": 63488 00:18:50.245 }, 00:18:50.245 { 00:18:50.245 "name": "BaseBdev3", 00:18:50.245 "uuid": 
"974e9e31-c817-5eed-9389-7010a0ac18b0", 00:18:50.245 "is_configured": true, 00:18:50.245 "data_offset": 2048, 00:18:50.245 "data_size": 63488 00:18:50.245 } 00:18:50.245 ] 00:18:50.245 }' 00:18:50.245 14:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.245 14:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.811 14:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:50.811 14:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.811 14:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.811 [2024-11-04 14:44:49.738175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:50.811 [2024-11-04 14:44:49.738267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.811 [2024-11-04 14:44:49.738298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:18:50.811 [2024-11-04 14:44:49.738319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.811 [2024-11-04 14:44:49.738908] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.811 [2024-11-04 14:44:49.738976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:50.811 [2024-11-04 14:44:49.739110] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:50.811 [2024-11-04 14:44:49.739138] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:50.811 [2024-11-04 14:44:49.739151] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:50.811 [2024-11-04 14:44:49.739185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:50.811 [2024-11-04 14:44:49.753729] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:18:50.811 spare 00:18:50.811 14:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.811 14:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:50.811 [2024-11-04 14:44:49.760944] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:51.744 14:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:51.744 14:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.744 14:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:51.744 14:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:51.744 14:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.744 14:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.744 14:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.744 14:44:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.744 14:44:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.744 14:44:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.744 14:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.744 "name": "raid_bdev1", 00:18:51.744 "uuid": "17c54954-d22b-4465-ad75-d51d6589d513", 00:18:51.744 "strip_size_kb": 64, 00:18:51.744 "state": 
"online", 00:18:51.744 "raid_level": "raid5f", 00:18:51.744 "superblock": true, 00:18:51.744 "num_base_bdevs": 3, 00:18:51.744 "num_base_bdevs_discovered": 3, 00:18:51.744 "num_base_bdevs_operational": 3, 00:18:51.744 "process": { 00:18:51.744 "type": "rebuild", 00:18:51.744 "target": "spare", 00:18:51.744 "progress": { 00:18:51.744 "blocks": 18432, 00:18:51.744 "percent": 14 00:18:51.744 } 00:18:51.744 }, 00:18:51.744 "base_bdevs_list": [ 00:18:51.744 { 00:18:51.744 "name": "spare", 00:18:51.744 "uuid": "18873d71-3586-5f40-a3a3-49658327de8e", 00:18:51.744 "is_configured": true, 00:18:51.744 "data_offset": 2048, 00:18:51.744 "data_size": 63488 00:18:51.744 }, 00:18:51.744 { 00:18:51.744 "name": "BaseBdev2", 00:18:51.744 "uuid": "eee2ced9-4baf-5d93-bbaf-a4b0655db1dd", 00:18:51.744 "is_configured": true, 00:18:51.744 "data_offset": 2048, 00:18:51.744 "data_size": 63488 00:18:51.744 }, 00:18:51.744 { 00:18:51.744 "name": "BaseBdev3", 00:18:51.744 "uuid": "974e9e31-c817-5eed-9389-7010a0ac18b0", 00:18:51.744 "is_configured": true, 00:18:51.744 "data_offset": 2048, 00:18:51.744 "data_size": 63488 00:18:51.744 } 00:18:51.744 ] 00:18:51.744 }' 00:18:51.744 14:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.002 14:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:52.002 14:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.002 14:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:52.002 14:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:52.002 14:44:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.002 14:44:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.002 [2024-11-04 14:44:50.931459] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:52.002 [2024-11-04 14:44:50.975749] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:52.002 [2024-11-04 14:44:50.975858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:52.002 [2024-11-04 14:44:50.975890] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:52.002 [2024-11-04 14:44:50.975901] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:52.002 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.002 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:52.002 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:52.002 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:52.002 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:52.002 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:52.002 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:52.002 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.002 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.002 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.002 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.002 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.002 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.002 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.002 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.002 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.002 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.002 "name": "raid_bdev1", 00:18:52.002 "uuid": "17c54954-d22b-4465-ad75-d51d6589d513", 00:18:52.002 "strip_size_kb": 64, 00:18:52.002 "state": "online", 00:18:52.002 "raid_level": "raid5f", 00:18:52.002 "superblock": true, 00:18:52.002 "num_base_bdevs": 3, 00:18:52.002 "num_base_bdevs_discovered": 2, 00:18:52.002 "num_base_bdevs_operational": 2, 00:18:52.002 "base_bdevs_list": [ 00:18:52.002 { 00:18:52.002 "name": null, 00:18:52.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.002 "is_configured": false, 00:18:52.002 "data_offset": 0, 00:18:52.002 "data_size": 63488 00:18:52.002 }, 00:18:52.002 { 00:18:52.002 "name": "BaseBdev2", 00:18:52.002 "uuid": "eee2ced9-4baf-5d93-bbaf-a4b0655db1dd", 00:18:52.002 "is_configured": true, 00:18:52.002 "data_offset": 2048, 00:18:52.002 "data_size": 63488 00:18:52.002 }, 00:18:52.002 { 00:18:52.002 "name": "BaseBdev3", 00:18:52.002 "uuid": "974e9e31-c817-5eed-9389-7010a0ac18b0", 00:18:52.002 "is_configured": true, 00:18:52.002 "data_offset": 2048, 00:18:52.002 "data_size": 63488 00:18:52.002 } 00:18:52.002 ] 00:18:52.002 }' 00:18:52.002 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.002 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.712 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:52.712 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:18:52.712 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:52.712 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:52.712 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.712 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.712 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.712 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.712 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.712 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.712 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.712 "name": "raid_bdev1", 00:18:52.712 "uuid": "17c54954-d22b-4465-ad75-d51d6589d513", 00:18:52.712 "strip_size_kb": 64, 00:18:52.712 "state": "online", 00:18:52.712 "raid_level": "raid5f", 00:18:52.712 "superblock": true, 00:18:52.712 "num_base_bdevs": 3, 00:18:52.712 "num_base_bdevs_discovered": 2, 00:18:52.712 "num_base_bdevs_operational": 2, 00:18:52.712 "base_bdevs_list": [ 00:18:52.712 { 00:18:52.712 "name": null, 00:18:52.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.712 "is_configured": false, 00:18:52.712 "data_offset": 0, 00:18:52.712 "data_size": 63488 00:18:52.712 }, 00:18:52.712 { 00:18:52.712 "name": "BaseBdev2", 00:18:52.713 "uuid": "eee2ced9-4baf-5d93-bbaf-a4b0655db1dd", 00:18:52.713 "is_configured": true, 00:18:52.713 "data_offset": 2048, 00:18:52.713 "data_size": 63488 00:18:52.713 }, 00:18:52.713 { 00:18:52.713 "name": "BaseBdev3", 00:18:52.713 "uuid": "974e9e31-c817-5eed-9389-7010a0ac18b0", 00:18:52.713 "is_configured": true, 
00:18:52.713 "data_offset": 2048, 00:18:52.713 "data_size": 63488 00:18:52.713 } 00:18:52.713 ] 00:18:52.713 }' 00:18:52.713 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.713 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:52.713 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.713 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:52.713 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:52.713 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.713 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.713 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.713 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:52.713 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.713 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.713 [2024-11-04 14:44:51.711226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:52.713 [2024-11-04 14:44:51.711306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:52.713 [2024-11-04 14:44:51.711344] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:52.713 [2024-11-04 14:44:51.711359] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:52.713 [2024-11-04 14:44:51.711921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:52.713 [2024-11-04 
14:44:51.711972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:52.713 [2024-11-04 14:44:51.712084] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:52.713 [2024-11-04 14:44:51.712106] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:52.713 [2024-11-04 14:44:51.712134] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:52.713 [2024-11-04 14:44:51.712146] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:52.713 BaseBdev1 00:18:52.713 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.713 14:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:53.648 14:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:53.648 14:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:53.648 14:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:53.648 14:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:53.648 14:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:53.648 14:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:53.648 14:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.648 14:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:53.648 14:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.648 14:44:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:53.648 14:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.648 14:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.648 14:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.648 14:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.648 14:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.906 14:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:53.906 "name": "raid_bdev1", 00:18:53.906 "uuid": "17c54954-d22b-4465-ad75-d51d6589d513", 00:18:53.906 "strip_size_kb": 64, 00:18:53.906 "state": "online", 00:18:53.906 "raid_level": "raid5f", 00:18:53.906 "superblock": true, 00:18:53.906 "num_base_bdevs": 3, 00:18:53.906 "num_base_bdevs_discovered": 2, 00:18:53.906 "num_base_bdevs_operational": 2, 00:18:53.906 "base_bdevs_list": [ 00:18:53.906 { 00:18:53.906 "name": null, 00:18:53.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.906 "is_configured": false, 00:18:53.906 "data_offset": 0, 00:18:53.906 "data_size": 63488 00:18:53.906 }, 00:18:53.906 { 00:18:53.906 "name": "BaseBdev2", 00:18:53.906 "uuid": "eee2ced9-4baf-5d93-bbaf-a4b0655db1dd", 00:18:53.906 "is_configured": true, 00:18:53.906 "data_offset": 2048, 00:18:53.906 "data_size": 63488 00:18:53.906 }, 00:18:53.906 { 00:18:53.906 "name": "BaseBdev3", 00:18:53.906 "uuid": "974e9e31-c817-5eed-9389-7010a0ac18b0", 00:18:53.906 "is_configured": true, 00:18:53.906 "data_offset": 2048, 00:18:53.906 "data_size": 63488 00:18:53.906 } 00:18:53.906 ] 00:18:53.906 }' 00:18:53.906 14:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.906 14:44:52 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:54.164 14:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:54.164 14:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.164 14:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:54.164 14:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:54.164 14:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:54.164 14:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.164 14:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.164 14:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.164 14:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.164 14:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.423 14:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.423 "name": "raid_bdev1", 00:18:54.423 "uuid": "17c54954-d22b-4465-ad75-d51d6589d513", 00:18:54.423 "strip_size_kb": 64, 00:18:54.423 "state": "online", 00:18:54.423 "raid_level": "raid5f", 00:18:54.423 "superblock": true, 00:18:54.423 "num_base_bdevs": 3, 00:18:54.423 "num_base_bdevs_discovered": 2, 00:18:54.423 "num_base_bdevs_operational": 2, 00:18:54.423 "base_bdevs_list": [ 00:18:54.423 { 00:18:54.423 "name": null, 00:18:54.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.423 "is_configured": false, 00:18:54.423 "data_offset": 0, 00:18:54.423 "data_size": 63488 00:18:54.423 }, 00:18:54.423 { 00:18:54.423 "name": "BaseBdev2", 00:18:54.423 "uuid": "eee2ced9-4baf-5d93-bbaf-a4b0655db1dd", 
00:18:54.423 "is_configured": true, 00:18:54.423 "data_offset": 2048, 00:18:54.423 "data_size": 63488 00:18:54.423 }, 00:18:54.423 { 00:18:54.423 "name": "BaseBdev3", 00:18:54.423 "uuid": "974e9e31-c817-5eed-9389-7010a0ac18b0", 00:18:54.423 "is_configured": true, 00:18:54.423 "data_offset": 2048, 00:18:54.423 "data_size": 63488 00:18:54.423 } 00:18:54.423 ] 00:18:54.423 }' 00:18:54.423 14:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:54.423 14:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:54.423 14:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:54.423 14:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:54.423 14:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:54.423 14:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:18:54.423 14:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:54.423 14:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:54.423 14:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:54.423 14:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:54.423 14:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:54.423 14:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:54.423 14:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.423 14:44:53 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:54.423 [2024-11-04 14:44:53.395726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:54.423 [2024-11-04 14:44:53.395957] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:54.423 [2024-11-04 14:44:53.395984] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:54.423 request: 00:18:54.423 { 00:18:54.423 "base_bdev": "BaseBdev1", 00:18:54.423 "raid_bdev": "raid_bdev1", 00:18:54.423 "method": "bdev_raid_add_base_bdev", 00:18:54.423 "req_id": 1 00:18:54.423 } 00:18:54.423 Got JSON-RPC error response 00:18:54.423 response: 00:18:54.423 { 00:18:54.423 "code": -22, 00:18:54.423 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:54.423 } 00:18:54.423 14:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:54.423 14:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:18:54.423 14:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:54.423 14:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:54.423 14:44:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:54.423 14:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:55.358 14:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:55.358 14:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:55.358 14:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:55.358 14:44:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:55.358 14:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:55.358 14:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:55.358 14:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.358 14:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.358 14:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.358 14:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.358 14:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.358 14:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.358 14:44:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.358 14:44:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.358 14:44:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.358 14:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.358 "name": "raid_bdev1", 00:18:55.358 "uuid": "17c54954-d22b-4465-ad75-d51d6589d513", 00:18:55.358 "strip_size_kb": 64, 00:18:55.358 "state": "online", 00:18:55.358 "raid_level": "raid5f", 00:18:55.358 "superblock": true, 00:18:55.358 "num_base_bdevs": 3, 00:18:55.358 "num_base_bdevs_discovered": 2, 00:18:55.358 "num_base_bdevs_operational": 2, 00:18:55.358 "base_bdevs_list": [ 00:18:55.358 { 00:18:55.358 "name": null, 00:18:55.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.358 "is_configured": false, 00:18:55.358 "data_offset": 0, 00:18:55.358 "data_size": 63488 00:18:55.358 }, 00:18:55.358 { 00:18:55.358 
"name": "BaseBdev2", 00:18:55.358 "uuid": "eee2ced9-4baf-5d93-bbaf-a4b0655db1dd", 00:18:55.358 "is_configured": true, 00:18:55.358 "data_offset": 2048, 00:18:55.358 "data_size": 63488 00:18:55.358 }, 00:18:55.358 { 00:18:55.358 "name": "BaseBdev3", 00:18:55.358 "uuid": "974e9e31-c817-5eed-9389-7010a0ac18b0", 00:18:55.358 "is_configured": true, 00:18:55.358 "data_offset": 2048, 00:18:55.358 "data_size": 63488 00:18:55.358 } 00:18:55.358 ] 00:18:55.358 }' 00:18:55.358 14:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.358 14:44:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.924 14:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:55.924 14:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:55.924 14:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:55.924 14:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:55.924 14:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:55.924 14:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.924 14:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.924 14:44:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.924 14:44:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.924 14:44:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.924 14:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:55.924 "name": "raid_bdev1", 00:18:55.924 "uuid": "17c54954-d22b-4465-ad75-d51d6589d513", 00:18:55.924 
"strip_size_kb": 64, 00:18:55.924 "state": "online", 00:18:55.924 "raid_level": "raid5f", 00:18:55.924 "superblock": true, 00:18:55.924 "num_base_bdevs": 3, 00:18:55.924 "num_base_bdevs_discovered": 2, 00:18:55.924 "num_base_bdevs_operational": 2, 00:18:55.924 "base_bdevs_list": [ 00:18:55.924 { 00:18:55.924 "name": null, 00:18:55.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.924 "is_configured": false, 00:18:55.924 "data_offset": 0, 00:18:55.924 "data_size": 63488 00:18:55.924 }, 00:18:55.924 { 00:18:55.924 "name": "BaseBdev2", 00:18:55.924 "uuid": "eee2ced9-4baf-5d93-bbaf-a4b0655db1dd", 00:18:55.924 "is_configured": true, 00:18:55.924 "data_offset": 2048, 00:18:55.924 "data_size": 63488 00:18:55.924 }, 00:18:55.924 { 00:18:55.924 "name": "BaseBdev3", 00:18:55.924 "uuid": "974e9e31-c817-5eed-9389-7010a0ac18b0", 00:18:55.924 "is_configured": true, 00:18:55.924 "data_offset": 2048, 00:18:55.924 "data_size": 63488 00:18:55.924 } 00:18:55.925 ] 00:18:55.925 }' 00:18:55.925 14:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:56.183 14:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:56.183 14:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:56.183 14:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:56.183 14:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82396 00:18:56.183 14:44:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 82396 ']' 00:18:56.183 14:44:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 82396 00:18:56.183 14:44:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:18:56.183 14:44:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:56.183 14:44:55 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82396 00:18:56.183 killing process with pid 82396 00:18:56.183 Received shutdown signal, test time was about 60.000000 seconds 00:18:56.183 00:18:56.183 Latency(us) 00:18:56.183 [2024-11-04T14:44:55.306Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.183 [2024-11-04T14:44:55.306Z] =================================================================================================================== 00:18:56.183 [2024-11-04T14:44:55.306Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:56.183 14:44:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:56.183 14:44:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:56.183 14:44:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82396' 00:18:56.183 14:44:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 82396 00:18:56.183 [2024-11-04 14:44:55.154650] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:56.183 14:44:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 82396 00:18:56.183 [2024-11-04 14:44:55.154802] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:56.183 [2024-11-04 14:44:55.154884] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:56.183 [2024-11-04 14:44:55.154903] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:56.442 [2024-11-04 14:44:55.509715] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:57.816 14:44:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:18:57.816 00:18:57.816 real 0m25.128s 00:18:57.816 user 0m33.676s 
00:18:57.816 sys 0m2.559s 00:18:57.816 14:44:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:57.816 ************************************ 00:18:57.816 END TEST raid5f_rebuild_test_sb 00:18:57.816 ************************************ 00:18:57.816 14:44:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:57.816 14:44:56 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:18:57.816 14:44:56 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:18:57.816 14:44:56 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:57.816 14:44:56 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:57.816 14:44:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:57.816 ************************************ 00:18:57.816 START TEST raid5f_state_function_test 00:18:57.816 ************************************ 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 false 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83161 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:57.816 Process raid pid: 83161 00:18:57.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83161' 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83161 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 83161 ']' 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:57.816 14:44:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.816 [2024-11-04 14:44:56.700058] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:18:57.816 [2024-11-04 14:44:56.700204] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.816 [2024-11-04 14:44:56.872109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.074 [2024-11-04 14:44:57.003405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.333 [2024-11-04 14:44:57.209304] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:58.333 [2024-11-04 14:44:57.209355] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:58.592 14:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:58.592 14:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:18:58.592 14:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:58.592 14:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.592 14:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.592 [2024-11-04 14:44:57.644141] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:58.592 [2024-11-04 14:44:57.644212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:58.592 [2024-11-04 14:44:57.644231] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:58.592 [2024-11-04 14:44:57.644247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:58.592 [2024-11-04 14:44:57.644257] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:18:58.592 [2024-11-04 14:44:57.644271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:58.592 [2024-11-04 14:44:57.644281] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:58.592 [2024-11-04 14:44:57.644294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:58.592 14:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.592 14:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:58.592 14:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:58.592 14:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:58.592 14:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:58.592 14:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:58.592 14:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:58.592 14:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.592 14:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.592 14:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.592 14:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.592 14:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.592 14:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.592 14:44:57 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.592 14:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.592 14:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.592 14:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.592 "name": "Existed_Raid", 00:18:58.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.592 "strip_size_kb": 64, 00:18:58.592 "state": "configuring", 00:18:58.592 "raid_level": "raid5f", 00:18:58.592 "superblock": false, 00:18:58.592 "num_base_bdevs": 4, 00:18:58.592 "num_base_bdevs_discovered": 0, 00:18:58.592 "num_base_bdevs_operational": 4, 00:18:58.592 "base_bdevs_list": [ 00:18:58.592 { 00:18:58.592 "name": "BaseBdev1", 00:18:58.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.592 "is_configured": false, 00:18:58.592 "data_offset": 0, 00:18:58.592 "data_size": 0 00:18:58.592 }, 00:18:58.592 { 00:18:58.592 "name": "BaseBdev2", 00:18:58.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.592 "is_configured": false, 00:18:58.592 "data_offset": 0, 00:18:58.592 "data_size": 0 00:18:58.592 }, 00:18:58.592 { 00:18:58.592 "name": "BaseBdev3", 00:18:58.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.592 "is_configured": false, 00:18:58.592 "data_offset": 0, 00:18:58.592 "data_size": 0 00:18:58.592 }, 00:18:58.592 { 00:18:58.592 "name": "BaseBdev4", 00:18:58.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.592 "is_configured": false, 00:18:58.592 "data_offset": 0, 00:18:58.592 "data_size": 0 00:18:58.592 } 00:18:58.592 ] 00:18:58.592 }' 00:18:58.592 14:44:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.592 14:44:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.160 [2024-11-04 14:44:58.160223] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:59.160 [2024-11-04 14:44:58.160448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.160 [2024-11-04 14:44:58.168228] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:59.160 [2024-11-04 14:44:58.168287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:59.160 [2024-11-04 14:44:58.168304] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:59.160 [2024-11-04 14:44:58.168320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:59.160 [2024-11-04 14:44:58.168330] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:59.160 [2024-11-04 14:44:58.168343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:59.160 [2024-11-04 14:44:58.168353] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:18:59.160 [2024-11-04 14:44:58.168366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.160 [2024-11-04 14:44:58.213032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:59.160 BaseBdev1 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.160 
14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.160 [ 00:18:59.160 { 00:18:59.160 "name": "BaseBdev1", 00:18:59.160 "aliases": [ 00:18:59.160 "d1f6d19a-7279-4093-8ac6-a30cc1f6a4e6" 00:18:59.160 ], 00:18:59.160 "product_name": "Malloc disk", 00:18:59.160 "block_size": 512, 00:18:59.160 "num_blocks": 65536, 00:18:59.160 "uuid": "d1f6d19a-7279-4093-8ac6-a30cc1f6a4e6", 00:18:59.160 "assigned_rate_limits": { 00:18:59.160 "rw_ios_per_sec": 0, 00:18:59.160 "rw_mbytes_per_sec": 0, 00:18:59.160 "r_mbytes_per_sec": 0, 00:18:59.160 "w_mbytes_per_sec": 0 00:18:59.160 }, 00:18:59.160 "claimed": true, 00:18:59.160 "claim_type": "exclusive_write", 00:18:59.160 "zoned": false, 00:18:59.160 "supported_io_types": { 00:18:59.160 "read": true, 00:18:59.160 "write": true, 00:18:59.160 "unmap": true, 00:18:59.160 "flush": true, 00:18:59.160 "reset": true, 00:18:59.160 "nvme_admin": false, 00:18:59.160 "nvme_io": false, 00:18:59.160 "nvme_io_md": false, 00:18:59.160 "write_zeroes": true, 00:18:59.160 "zcopy": true, 00:18:59.160 "get_zone_info": false, 00:18:59.160 "zone_management": false, 00:18:59.160 "zone_append": false, 00:18:59.160 "compare": false, 00:18:59.160 "compare_and_write": false, 00:18:59.160 "abort": true, 00:18:59.160 "seek_hole": false, 00:18:59.160 "seek_data": false, 00:18:59.160 "copy": true, 00:18:59.160 "nvme_iov_md": false 00:18:59.160 }, 00:18:59.160 "memory_domains": [ 00:18:59.160 { 00:18:59.160 "dma_device_id": "system", 00:18:59.160 "dma_device_type": 1 00:18:59.160 }, 00:18:59.160 { 00:18:59.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.160 "dma_device_type": 2 00:18:59.160 } 00:18:59.160 ], 00:18:59.160 "driver_specific": {} 00:18:59.160 } 
00:18:59.160 ] 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.160 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:59.419 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.419 "name": "Existed_Raid", 00:18:59.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.419 "strip_size_kb": 64, 00:18:59.419 "state": "configuring", 00:18:59.419 "raid_level": "raid5f", 00:18:59.419 "superblock": false, 00:18:59.419 "num_base_bdevs": 4, 00:18:59.419 "num_base_bdevs_discovered": 1, 00:18:59.419 "num_base_bdevs_operational": 4, 00:18:59.419 "base_bdevs_list": [ 00:18:59.419 { 00:18:59.419 "name": "BaseBdev1", 00:18:59.419 "uuid": "d1f6d19a-7279-4093-8ac6-a30cc1f6a4e6", 00:18:59.419 "is_configured": true, 00:18:59.419 "data_offset": 0, 00:18:59.419 "data_size": 65536 00:18:59.419 }, 00:18:59.419 { 00:18:59.419 "name": "BaseBdev2", 00:18:59.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.419 "is_configured": false, 00:18:59.419 "data_offset": 0, 00:18:59.419 "data_size": 0 00:18:59.419 }, 00:18:59.419 { 00:18:59.419 "name": "BaseBdev3", 00:18:59.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.419 "is_configured": false, 00:18:59.419 "data_offset": 0, 00:18:59.419 "data_size": 0 00:18:59.419 }, 00:18:59.419 { 00:18:59.419 "name": "BaseBdev4", 00:18:59.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.419 "is_configured": false, 00:18:59.419 "data_offset": 0, 00:18:59.419 "data_size": 0 00:18:59.419 } 00:18:59.419 ] 00:18:59.419 }' 00:18:59.419 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.419 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.677 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:59.677 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.677 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.677 
[2024-11-04 14:44:58.753221] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:59.677 [2024-11-04 14:44:58.753288] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:59.677 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.677 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:59.677 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.677 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.677 [2024-11-04 14:44:58.761266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:59.677 [2024-11-04 14:44:58.763760] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:59.677 [2024-11-04 14:44:58.763819] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:59.677 [2024-11-04 14:44:58.763836] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:59.677 [2024-11-04 14:44:58.763854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:59.677 [2024-11-04 14:44:58.763864] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:59.677 [2024-11-04 14:44:58.763877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:59.677 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.677 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:59.677 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:18:59.677 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:59.677 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:59.677 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:59.677 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:59.677 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:59.677 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:59.677 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.677 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.677 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.677 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.677 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.677 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.678 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.678 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.678 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.936 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.936 "name": "Existed_Raid", 00:18:59.936 "uuid": "00000000-0000-0000-0000-000000000000", 
00:18:59.936 "strip_size_kb": 64, 00:18:59.936 "state": "configuring", 00:18:59.936 "raid_level": "raid5f", 00:18:59.936 "superblock": false, 00:18:59.936 "num_base_bdevs": 4, 00:18:59.936 "num_base_bdevs_discovered": 1, 00:18:59.936 "num_base_bdevs_operational": 4, 00:18:59.936 "base_bdevs_list": [ 00:18:59.936 { 00:18:59.936 "name": "BaseBdev1", 00:18:59.936 "uuid": "d1f6d19a-7279-4093-8ac6-a30cc1f6a4e6", 00:18:59.936 "is_configured": true, 00:18:59.936 "data_offset": 0, 00:18:59.936 "data_size": 65536 00:18:59.936 }, 00:18:59.936 { 00:18:59.936 "name": "BaseBdev2", 00:18:59.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.936 "is_configured": false, 00:18:59.936 "data_offset": 0, 00:18:59.936 "data_size": 0 00:18:59.936 }, 00:18:59.936 { 00:18:59.936 "name": "BaseBdev3", 00:18:59.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.936 "is_configured": false, 00:18:59.936 "data_offset": 0, 00:18:59.936 "data_size": 0 00:18:59.936 }, 00:18:59.936 { 00:18:59.936 "name": "BaseBdev4", 00:18:59.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.936 "is_configured": false, 00:18:59.936 "data_offset": 0, 00:18:59.936 "data_size": 0 00:18:59.936 } 00:18:59.936 ] 00:18:59.936 }' 00:18:59.936 14:44:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.936 14:44:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.194 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:00.194 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.194 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.194 [2024-11-04 14:44:59.287345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:00.194 BaseBdev2 00:19:00.194 14:44:59 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.194 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:00.194 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:19:00.194 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:00.194 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:00.194 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:00.194 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:00.194 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:00.194 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.194 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.194 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.194 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:00.194 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.194 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.194 [ 00:19:00.194 { 00:19:00.194 "name": "BaseBdev2", 00:19:00.194 "aliases": [ 00:19:00.194 "efc9d4d6-c172-4eef-b1e2-735e70b373fd" 00:19:00.194 ], 00:19:00.194 "product_name": "Malloc disk", 00:19:00.194 "block_size": 512, 00:19:00.194 "num_blocks": 65536, 00:19:00.194 "uuid": "efc9d4d6-c172-4eef-b1e2-735e70b373fd", 00:19:00.194 "assigned_rate_limits": { 00:19:00.194 "rw_ios_per_sec": 0, 00:19:00.194 "rw_mbytes_per_sec": 0, 00:19:00.194 
"r_mbytes_per_sec": 0, 00:19:00.194 "w_mbytes_per_sec": 0 00:19:00.194 }, 00:19:00.194 "claimed": true, 00:19:00.194 "claim_type": "exclusive_write", 00:19:00.194 "zoned": false, 00:19:00.194 "supported_io_types": { 00:19:00.194 "read": true, 00:19:00.194 "write": true, 00:19:00.194 "unmap": true, 00:19:00.194 "flush": true, 00:19:00.194 "reset": true, 00:19:00.194 "nvme_admin": false, 00:19:00.194 "nvme_io": false, 00:19:00.194 "nvme_io_md": false, 00:19:00.194 "write_zeroes": true, 00:19:00.194 "zcopy": true, 00:19:00.194 "get_zone_info": false, 00:19:00.194 "zone_management": false, 00:19:00.194 "zone_append": false, 00:19:00.194 "compare": false, 00:19:00.453 "compare_and_write": false, 00:19:00.453 "abort": true, 00:19:00.453 "seek_hole": false, 00:19:00.453 "seek_data": false, 00:19:00.453 "copy": true, 00:19:00.453 "nvme_iov_md": false 00:19:00.453 }, 00:19:00.453 "memory_domains": [ 00:19:00.453 { 00:19:00.453 "dma_device_id": "system", 00:19:00.453 "dma_device_type": 1 00:19:00.453 }, 00:19:00.453 { 00:19:00.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.453 "dma_device_type": 2 00:19:00.453 } 00:19:00.453 ], 00:19:00.453 "driver_specific": {} 00:19:00.453 } 00:19:00.453 ] 00:19:00.453 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.453 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:00.453 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:00.453 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:00.453 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:00.453 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:00.453 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:19:00.453 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:00.453 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:00.453 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:00.453 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.453 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.453 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.453 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.453 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.453 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.453 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.453 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.453 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.453 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.453 "name": "Existed_Raid", 00:19:00.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.453 "strip_size_kb": 64, 00:19:00.453 "state": "configuring", 00:19:00.453 "raid_level": "raid5f", 00:19:00.453 "superblock": false, 00:19:00.453 "num_base_bdevs": 4, 00:19:00.453 "num_base_bdevs_discovered": 2, 00:19:00.453 "num_base_bdevs_operational": 4, 00:19:00.453 "base_bdevs_list": [ 00:19:00.453 { 00:19:00.453 "name": "BaseBdev1", 00:19:00.453 "uuid": 
"d1f6d19a-7279-4093-8ac6-a30cc1f6a4e6", 00:19:00.453 "is_configured": true, 00:19:00.453 "data_offset": 0, 00:19:00.453 "data_size": 65536 00:19:00.453 }, 00:19:00.453 { 00:19:00.453 "name": "BaseBdev2", 00:19:00.453 "uuid": "efc9d4d6-c172-4eef-b1e2-735e70b373fd", 00:19:00.453 "is_configured": true, 00:19:00.453 "data_offset": 0, 00:19:00.453 "data_size": 65536 00:19:00.453 }, 00:19:00.453 { 00:19:00.453 "name": "BaseBdev3", 00:19:00.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.453 "is_configured": false, 00:19:00.453 "data_offset": 0, 00:19:00.453 "data_size": 0 00:19:00.453 }, 00:19:00.453 { 00:19:00.453 "name": "BaseBdev4", 00:19:00.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.453 "is_configured": false, 00:19:00.453 "data_offset": 0, 00:19:00.453 "data_size": 0 00:19:00.453 } 00:19:00.453 ] 00:19:00.453 }' 00:19:00.453 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.453 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.712 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:00.712 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.712 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.971 [2024-11-04 14:44:59.871546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:00.971 BaseBdev3 00:19:00.971 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.971 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:19:00.971 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:19:00.971 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- 
# local bdev_timeout= 00:19:00.971 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:00.971 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:00.971 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:00.971 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:00.971 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.971 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.971 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.971 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:00.971 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.971 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.971 [ 00:19:00.971 { 00:19:00.971 "name": "BaseBdev3", 00:19:00.971 "aliases": [ 00:19:00.971 "9b042e46-e322-4a39-8ce1-56b3c0341774" 00:19:00.971 ], 00:19:00.971 "product_name": "Malloc disk", 00:19:00.971 "block_size": 512, 00:19:00.971 "num_blocks": 65536, 00:19:00.971 "uuid": "9b042e46-e322-4a39-8ce1-56b3c0341774", 00:19:00.971 "assigned_rate_limits": { 00:19:00.971 "rw_ios_per_sec": 0, 00:19:00.971 "rw_mbytes_per_sec": 0, 00:19:00.971 "r_mbytes_per_sec": 0, 00:19:00.971 "w_mbytes_per_sec": 0 00:19:00.971 }, 00:19:00.971 "claimed": true, 00:19:00.971 "claim_type": "exclusive_write", 00:19:00.971 "zoned": false, 00:19:00.971 "supported_io_types": { 00:19:00.971 "read": true, 00:19:00.971 "write": true, 00:19:00.971 "unmap": true, 00:19:00.971 "flush": true, 00:19:00.971 "reset": true, 00:19:00.971 "nvme_admin": false, 
00:19:00.971 "nvme_io": false, 00:19:00.971 "nvme_io_md": false, 00:19:00.971 "write_zeroes": true, 00:19:00.971 "zcopy": true, 00:19:00.971 "get_zone_info": false, 00:19:00.971 "zone_management": false, 00:19:00.971 "zone_append": false, 00:19:00.971 "compare": false, 00:19:00.971 "compare_and_write": false, 00:19:00.971 "abort": true, 00:19:00.971 "seek_hole": false, 00:19:00.971 "seek_data": false, 00:19:00.971 "copy": true, 00:19:00.971 "nvme_iov_md": false 00:19:00.971 }, 00:19:00.971 "memory_domains": [ 00:19:00.971 { 00:19:00.971 "dma_device_id": "system", 00:19:00.971 "dma_device_type": 1 00:19:00.971 }, 00:19:00.971 { 00:19:00.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.971 "dma_device_type": 2 00:19:00.971 } 00:19:00.971 ], 00:19:00.971 "driver_specific": {} 00:19:00.971 } 00:19:00.971 ] 00:19:00.971 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.971 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:00.971 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:00.971 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:00.971 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:00.971 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:00.971 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:00.971 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:00.971 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:00.971 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:19:00.971 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.971 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.971 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.971 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.971 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.971 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.972 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.972 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.972 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.972 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.972 "name": "Existed_Raid", 00:19:00.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.972 "strip_size_kb": 64, 00:19:00.972 "state": "configuring", 00:19:00.972 "raid_level": "raid5f", 00:19:00.972 "superblock": false, 00:19:00.972 "num_base_bdevs": 4, 00:19:00.972 "num_base_bdevs_discovered": 3, 00:19:00.972 "num_base_bdevs_operational": 4, 00:19:00.972 "base_bdevs_list": [ 00:19:00.972 { 00:19:00.972 "name": "BaseBdev1", 00:19:00.972 "uuid": "d1f6d19a-7279-4093-8ac6-a30cc1f6a4e6", 00:19:00.972 "is_configured": true, 00:19:00.972 "data_offset": 0, 00:19:00.972 "data_size": 65536 00:19:00.972 }, 00:19:00.972 { 00:19:00.972 "name": "BaseBdev2", 00:19:00.972 "uuid": "efc9d4d6-c172-4eef-b1e2-735e70b373fd", 00:19:00.972 "is_configured": true, 00:19:00.972 "data_offset": 0, 00:19:00.972 "data_size": 65536 00:19:00.972 }, 00:19:00.972 { 
00:19:00.972 "name": "BaseBdev3", 00:19:00.972 "uuid": "9b042e46-e322-4a39-8ce1-56b3c0341774", 00:19:00.972 "is_configured": true, 00:19:00.972 "data_offset": 0, 00:19:00.972 "data_size": 65536 00:19:00.972 }, 00:19:00.972 { 00:19:00.972 "name": "BaseBdev4", 00:19:00.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.972 "is_configured": false, 00:19:00.972 "data_offset": 0, 00:19:00.972 "data_size": 0 00:19:00.972 } 00:19:00.972 ] 00:19:00.972 }' 00:19:00.972 14:44:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.972 14:44:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.540 [2024-11-04 14:45:00.469814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:01.540 [2024-11-04 14:45:00.470202] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:01.540 [2024-11-04 14:45:00.470228] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:01.540 [2024-11-04 14:45:00.470573] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:01.540 [2024-11-04 14:45:00.477414] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:01.540 [2024-11-04 14:45:00.477582] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:01.540 [2024-11-04 14:45:00.477992] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:01.540 BaseBdev4 00:19:01.540 14:45:00 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.540 [ 00:19:01.540 { 00:19:01.540 "name": "BaseBdev4", 00:19:01.540 "aliases": [ 00:19:01.540 "1178dfc0-b66e-4973-9814-1e826fc15d26" 00:19:01.540 ], 00:19:01.540 "product_name": "Malloc disk", 00:19:01.540 "block_size": 512, 00:19:01.540 "num_blocks": 65536, 00:19:01.540 "uuid": "1178dfc0-b66e-4973-9814-1e826fc15d26", 00:19:01.540 "assigned_rate_limits": { 00:19:01.540 "rw_ios_per_sec": 0, 00:19:01.540 
"rw_mbytes_per_sec": 0, 00:19:01.540 "r_mbytes_per_sec": 0, 00:19:01.540 "w_mbytes_per_sec": 0 00:19:01.540 }, 00:19:01.540 "claimed": true, 00:19:01.540 "claim_type": "exclusive_write", 00:19:01.540 "zoned": false, 00:19:01.540 "supported_io_types": { 00:19:01.540 "read": true, 00:19:01.540 "write": true, 00:19:01.540 "unmap": true, 00:19:01.540 "flush": true, 00:19:01.540 "reset": true, 00:19:01.540 "nvme_admin": false, 00:19:01.540 "nvme_io": false, 00:19:01.540 "nvme_io_md": false, 00:19:01.540 "write_zeroes": true, 00:19:01.540 "zcopy": true, 00:19:01.540 "get_zone_info": false, 00:19:01.540 "zone_management": false, 00:19:01.540 "zone_append": false, 00:19:01.540 "compare": false, 00:19:01.540 "compare_and_write": false, 00:19:01.540 "abort": true, 00:19:01.540 "seek_hole": false, 00:19:01.540 "seek_data": false, 00:19:01.540 "copy": true, 00:19:01.540 "nvme_iov_md": false 00:19:01.540 }, 00:19:01.540 "memory_domains": [ 00:19:01.540 { 00:19:01.540 "dma_device_id": "system", 00:19:01.540 "dma_device_type": 1 00:19:01.540 }, 00:19:01.540 { 00:19:01.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.540 "dma_device_type": 2 00:19:01.540 } 00:19:01.540 ], 00:19:01.540 "driver_specific": {} 00:19:01.540 } 00:19:01.540 ] 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:01.540 14:45:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.540 "name": "Existed_Raid", 00:19:01.540 "uuid": "c507cbfe-9b63-4fe8-a293-d4d654e95b96", 00:19:01.540 "strip_size_kb": 64, 00:19:01.540 "state": "online", 00:19:01.540 "raid_level": "raid5f", 00:19:01.540 "superblock": false, 00:19:01.540 "num_base_bdevs": 4, 00:19:01.540 "num_base_bdevs_discovered": 4, 00:19:01.540 "num_base_bdevs_operational": 4, 00:19:01.540 "base_bdevs_list": [ 00:19:01.540 { 00:19:01.540 "name": 
"BaseBdev1", 00:19:01.540 "uuid": "d1f6d19a-7279-4093-8ac6-a30cc1f6a4e6", 00:19:01.540 "is_configured": true, 00:19:01.540 "data_offset": 0, 00:19:01.540 "data_size": 65536 00:19:01.540 }, 00:19:01.540 { 00:19:01.540 "name": "BaseBdev2", 00:19:01.540 "uuid": "efc9d4d6-c172-4eef-b1e2-735e70b373fd", 00:19:01.540 "is_configured": true, 00:19:01.540 "data_offset": 0, 00:19:01.540 "data_size": 65536 00:19:01.540 }, 00:19:01.540 { 00:19:01.540 "name": "BaseBdev3", 00:19:01.540 "uuid": "9b042e46-e322-4a39-8ce1-56b3c0341774", 00:19:01.540 "is_configured": true, 00:19:01.540 "data_offset": 0, 00:19:01.540 "data_size": 65536 00:19:01.540 }, 00:19:01.540 { 00:19:01.540 "name": "BaseBdev4", 00:19:01.540 "uuid": "1178dfc0-b66e-4973-9814-1e826fc15d26", 00:19:01.540 "is_configured": true, 00:19:01.540 "data_offset": 0, 00:19:01.540 "data_size": 65536 00:19:01.540 } 00:19:01.540 ] 00:19:01.540 }' 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.540 14:45:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.107 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:02.107 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:02.107 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:02.107 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:02.107 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:02.107 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:02.107 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:02.108 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:19:02.108 14:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.108 14:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.108 [2024-11-04 14:45:01.085761] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:02.108 14:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.108 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:02.108 "name": "Existed_Raid", 00:19:02.108 "aliases": [ 00:19:02.108 "c507cbfe-9b63-4fe8-a293-d4d654e95b96" 00:19:02.108 ], 00:19:02.108 "product_name": "Raid Volume", 00:19:02.108 "block_size": 512, 00:19:02.108 "num_blocks": 196608, 00:19:02.108 "uuid": "c507cbfe-9b63-4fe8-a293-d4d654e95b96", 00:19:02.108 "assigned_rate_limits": { 00:19:02.108 "rw_ios_per_sec": 0, 00:19:02.108 "rw_mbytes_per_sec": 0, 00:19:02.108 "r_mbytes_per_sec": 0, 00:19:02.108 "w_mbytes_per_sec": 0 00:19:02.108 }, 00:19:02.108 "claimed": false, 00:19:02.108 "zoned": false, 00:19:02.108 "supported_io_types": { 00:19:02.108 "read": true, 00:19:02.108 "write": true, 00:19:02.108 "unmap": false, 00:19:02.108 "flush": false, 00:19:02.108 "reset": true, 00:19:02.108 "nvme_admin": false, 00:19:02.108 "nvme_io": false, 00:19:02.108 "nvme_io_md": false, 00:19:02.108 "write_zeroes": true, 00:19:02.108 "zcopy": false, 00:19:02.108 "get_zone_info": false, 00:19:02.108 "zone_management": false, 00:19:02.108 "zone_append": false, 00:19:02.108 "compare": false, 00:19:02.108 "compare_and_write": false, 00:19:02.108 "abort": false, 00:19:02.108 "seek_hole": false, 00:19:02.108 "seek_data": false, 00:19:02.108 "copy": false, 00:19:02.108 "nvme_iov_md": false 00:19:02.108 }, 00:19:02.108 "driver_specific": { 00:19:02.108 "raid": { 00:19:02.108 "uuid": "c507cbfe-9b63-4fe8-a293-d4d654e95b96", 00:19:02.108 "strip_size_kb": 64, 
00:19:02.108 "state": "online", 00:19:02.108 "raid_level": "raid5f", 00:19:02.108 "superblock": false, 00:19:02.108 "num_base_bdevs": 4, 00:19:02.108 "num_base_bdevs_discovered": 4, 00:19:02.108 "num_base_bdevs_operational": 4, 00:19:02.108 "base_bdevs_list": [ 00:19:02.108 { 00:19:02.108 "name": "BaseBdev1", 00:19:02.108 "uuid": "d1f6d19a-7279-4093-8ac6-a30cc1f6a4e6", 00:19:02.108 "is_configured": true, 00:19:02.108 "data_offset": 0, 00:19:02.108 "data_size": 65536 00:19:02.108 }, 00:19:02.108 { 00:19:02.108 "name": "BaseBdev2", 00:19:02.108 "uuid": "efc9d4d6-c172-4eef-b1e2-735e70b373fd", 00:19:02.108 "is_configured": true, 00:19:02.108 "data_offset": 0, 00:19:02.108 "data_size": 65536 00:19:02.108 }, 00:19:02.108 { 00:19:02.108 "name": "BaseBdev3", 00:19:02.108 "uuid": "9b042e46-e322-4a39-8ce1-56b3c0341774", 00:19:02.108 "is_configured": true, 00:19:02.108 "data_offset": 0, 00:19:02.108 "data_size": 65536 00:19:02.108 }, 00:19:02.108 { 00:19:02.108 "name": "BaseBdev4", 00:19:02.108 "uuid": "1178dfc0-b66e-4973-9814-1e826fc15d26", 00:19:02.108 "is_configured": true, 00:19:02.108 "data_offset": 0, 00:19:02.108 "data_size": 65536 00:19:02.108 } 00:19:02.108 ] 00:19:02.108 } 00:19:02.108 } 00:19:02.108 }' 00:19:02.108 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:02.108 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:02.108 BaseBdev2 00:19:02.108 BaseBdev3 00:19:02.108 BaseBdev4' 00:19:02.108 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:02.366 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:02.366 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:02.367 14:45:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:02.367 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:02.367 14:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.367 14:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.367 14:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.367 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:02.367 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:02.367 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:02.367 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:02.367 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:02.367 14:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.367 14:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.367 14:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.367 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:02.367 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:02.367 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:02.367 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:19:02.367 14:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.367 14:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.367 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:02.367 14:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.367 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:02.367 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:02.367 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:02.367 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:02.367 14:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.367 14:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.367 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:02.367 14:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.625 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:02.625 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:02.625 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:02.625 14:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.625 14:45:01 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:19:02.625 [2024-11-04 14:45:01.497698] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:02.625 14:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.625 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:02.625 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:19:02.625 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:02.625 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:02.625 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:02.625 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:02.625 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:02.625 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:02.625 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:02.625 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:02.625 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:02.625 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.625 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.625 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.625 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.625 14:45:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.625 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:02.625 14:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.625 14:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.625 14:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.625 14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.625 "name": "Existed_Raid", 00:19:02.625 "uuid": "c507cbfe-9b63-4fe8-a293-d4d654e95b96", 00:19:02.625 "strip_size_kb": 64, 00:19:02.625 "state": "online", 00:19:02.625 "raid_level": "raid5f", 00:19:02.625 "superblock": false, 00:19:02.625 "num_base_bdevs": 4, 00:19:02.625 "num_base_bdevs_discovered": 3, 00:19:02.625 "num_base_bdevs_operational": 3, 00:19:02.625 "base_bdevs_list": [ 00:19:02.625 { 00:19:02.625 "name": null, 00:19:02.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.625 "is_configured": false, 00:19:02.625 "data_offset": 0, 00:19:02.625 "data_size": 65536 00:19:02.625 }, 00:19:02.625 { 00:19:02.625 "name": "BaseBdev2", 00:19:02.625 "uuid": "efc9d4d6-c172-4eef-b1e2-735e70b373fd", 00:19:02.625 "is_configured": true, 00:19:02.625 "data_offset": 0, 00:19:02.625 "data_size": 65536 00:19:02.625 }, 00:19:02.625 { 00:19:02.625 "name": "BaseBdev3", 00:19:02.625 "uuid": "9b042e46-e322-4a39-8ce1-56b3c0341774", 00:19:02.625 "is_configured": true, 00:19:02.625 "data_offset": 0, 00:19:02.625 "data_size": 65536 00:19:02.625 }, 00:19:02.625 { 00:19:02.625 "name": "BaseBdev4", 00:19:02.625 "uuid": "1178dfc0-b66e-4973-9814-1e826fc15d26", 00:19:02.625 "is_configured": true, 00:19:02.625 "data_offset": 0, 00:19:02.625 "data_size": 65536 00:19:02.625 } 00:19:02.625 ] 00:19:02.625 }' 00:19:02.625 
14:45:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.625 14:45:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.191 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:03.191 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:03.191 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.191 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.191 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.191 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:03.191 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.191 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:03.191 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:03.191 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:03.191 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.191 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.191 [2024-11-04 14:45:02.174632] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:03.191 [2024-11-04 14:45:02.174771] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:03.191 [2024-11-04 14:45:02.261049] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:03.191 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:19:03.191 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:03.191 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:03.191 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.191 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.191 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.191 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:03.191 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.450 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:03.450 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:03.450 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:19:03.450 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.450 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.450 [2024-11-04 14:45:02.329112] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:03.450 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.450 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:03.450 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:03.450 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.450 14:45:02 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.450 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.450 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:03.450 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.450 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:03.450 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:03.450 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:19:03.450 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.450 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.450 [2024-11-04 14:45:02.479325] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:03.450 [2024-11-04 14:45:02.479392] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:03.450 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.450 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:03.450 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:03.450 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.450 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:03.450 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.709 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:19:03.709 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.709 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:03.709 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:03.709 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:19:03.709 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:03.709 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:03.709 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:03.709 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.709 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.709 BaseBdev2 00:19:03.709 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.709 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:19:03.709 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:19:03.709 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:03.709 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:03.709 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:03.709 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:03.709 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:03.709 14:45:02 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.709 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.709 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.709 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:03.709 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.709 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.709 [ 00:19:03.709 { 00:19:03.709 "name": "BaseBdev2", 00:19:03.709 "aliases": [ 00:19:03.709 "cc4df765-6094-4e67-9938-969513e0cff6" 00:19:03.709 ], 00:19:03.709 "product_name": "Malloc disk", 00:19:03.709 "block_size": 512, 00:19:03.709 "num_blocks": 65536, 00:19:03.709 "uuid": "cc4df765-6094-4e67-9938-969513e0cff6", 00:19:03.709 "assigned_rate_limits": { 00:19:03.709 "rw_ios_per_sec": 0, 00:19:03.709 "rw_mbytes_per_sec": 0, 00:19:03.709 "r_mbytes_per_sec": 0, 00:19:03.709 "w_mbytes_per_sec": 0 00:19:03.709 }, 00:19:03.709 "claimed": false, 00:19:03.709 "zoned": false, 00:19:03.709 "supported_io_types": { 00:19:03.709 "read": true, 00:19:03.709 "write": true, 00:19:03.709 "unmap": true, 00:19:03.709 "flush": true, 00:19:03.709 "reset": true, 00:19:03.709 "nvme_admin": false, 00:19:03.709 "nvme_io": false, 00:19:03.709 "nvme_io_md": false, 00:19:03.709 "write_zeroes": true, 00:19:03.709 "zcopy": true, 00:19:03.709 "get_zone_info": false, 00:19:03.709 "zone_management": false, 00:19:03.709 "zone_append": false, 00:19:03.709 "compare": false, 00:19:03.709 "compare_and_write": false, 00:19:03.709 "abort": true, 00:19:03.709 "seek_hole": false, 00:19:03.709 "seek_data": false, 00:19:03.709 "copy": true, 00:19:03.709 "nvme_iov_md": false 00:19:03.709 }, 00:19:03.709 "memory_domains": [ 00:19:03.709 { 00:19:03.709 "dma_device_id": "system", 00:19:03.709 
"dma_device_type": 1 00:19:03.709 }, 00:19:03.709 { 00:19:03.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.709 "dma_device_type": 2 00:19:03.709 } 00:19:03.709 ], 00:19:03.709 "driver_specific": {} 00:19:03.709 } 00:19:03.709 ] 00:19:03.709 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.709 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:03.709 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:03.709 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:03.709 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:03.709 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.709 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.709 BaseBdev3 00:19:03.709 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.709 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:19:03.709 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:19:03.710 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:03.710 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:03.710 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:03.710 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:03.710 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:03.710 14:45:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.710 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.710 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.710 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:03.710 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.710 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.710 [ 00:19:03.710 { 00:19:03.710 "name": "BaseBdev3", 00:19:03.710 "aliases": [ 00:19:03.710 "57cb28ee-932f-4131-a9b9-e281e5acecaa" 00:19:03.710 ], 00:19:03.710 "product_name": "Malloc disk", 00:19:03.710 "block_size": 512, 00:19:03.710 "num_blocks": 65536, 00:19:03.710 "uuid": "57cb28ee-932f-4131-a9b9-e281e5acecaa", 00:19:03.710 "assigned_rate_limits": { 00:19:03.710 "rw_ios_per_sec": 0, 00:19:03.710 "rw_mbytes_per_sec": 0, 00:19:03.710 "r_mbytes_per_sec": 0, 00:19:03.710 "w_mbytes_per_sec": 0 00:19:03.710 }, 00:19:03.710 "claimed": false, 00:19:03.710 "zoned": false, 00:19:03.710 "supported_io_types": { 00:19:03.710 "read": true, 00:19:03.710 "write": true, 00:19:03.710 "unmap": true, 00:19:03.710 "flush": true, 00:19:03.710 "reset": true, 00:19:03.710 "nvme_admin": false, 00:19:03.710 "nvme_io": false, 00:19:03.710 "nvme_io_md": false, 00:19:03.710 "write_zeroes": true, 00:19:03.710 "zcopy": true, 00:19:03.710 "get_zone_info": false, 00:19:03.710 "zone_management": false, 00:19:03.710 "zone_append": false, 00:19:03.710 "compare": false, 00:19:03.710 "compare_and_write": false, 00:19:03.710 "abort": true, 00:19:03.710 "seek_hole": false, 00:19:03.710 "seek_data": false, 00:19:03.710 "copy": true, 00:19:03.710 "nvme_iov_md": false 00:19:03.710 }, 00:19:03.710 "memory_domains": [ 00:19:03.710 { 00:19:03.710 
"dma_device_id": "system", 00:19:03.710 "dma_device_type": 1 00:19:03.710 }, 00:19:03.710 { 00:19:03.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.710 "dma_device_type": 2 00:19:03.710 } 00:19:03.710 ], 00:19:03.710 "driver_specific": {} 00:19:03.710 } 00:19:03.710 ] 00:19:03.710 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.710 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:03.710 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:03.710 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:03.710 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:03.710 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.710 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.710 BaseBdev4 00:19:03.710 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.710 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:19:03.710 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:19:03.710 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:03.710 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:03.710 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:03.710 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:03.710 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 
00:19:03.710 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.710 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.710 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.710 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:03.710 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.710 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.710 [ 00:19:03.710 { 00:19:03.710 "name": "BaseBdev4", 00:19:03.710 "aliases": [ 00:19:03.710 "e6053f7e-f9cb-43e5-8c38-88f5fcdaba9d" 00:19:03.710 ], 00:19:03.710 "product_name": "Malloc disk", 00:19:03.710 "block_size": 512, 00:19:03.710 "num_blocks": 65536, 00:19:03.710 "uuid": "e6053f7e-f9cb-43e5-8c38-88f5fcdaba9d", 00:19:03.710 "assigned_rate_limits": { 00:19:03.710 "rw_ios_per_sec": 0, 00:19:03.710 "rw_mbytes_per_sec": 0, 00:19:03.710 "r_mbytes_per_sec": 0, 00:19:03.710 "w_mbytes_per_sec": 0 00:19:03.710 }, 00:19:03.710 "claimed": false, 00:19:03.710 "zoned": false, 00:19:03.710 "supported_io_types": { 00:19:03.710 "read": true, 00:19:03.710 "write": true, 00:19:03.710 "unmap": true, 00:19:03.710 "flush": true, 00:19:03.710 "reset": true, 00:19:03.710 "nvme_admin": false, 00:19:03.710 "nvme_io": false, 00:19:03.710 "nvme_io_md": false, 00:19:03.710 "write_zeroes": true, 00:19:03.710 "zcopy": true, 00:19:03.710 "get_zone_info": false, 00:19:03.710 "zone_management": false, 00:19:03.710 "zone_append": false, 00:19:03.710 "compare": false, 00:19:03.710 "compare_and_write": false, 00:19:03.710 "abort": true, 00:19:03.710 "seek_hole": false, 00:19:03.710 "seek_data": false, 00:19:03.710 "copy": true, 00:19:03.710 "nvme_iov_md": false 00:19:03.710 }, 00:19:03.969 "memory_domains": [ 
00:19:03.969 { 00:19:03.969 "dma_device_id": "system", 00:19:03.969 "dma_device_type": 1 00:19:03.969 }, 00:19:03.969 { 00:19:03.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.969 "dma_device_type": 2 00:19:03.969 } 00:19:03.969 ], 00:19:03.969 "driver_specific": {} 00:19:03.969 } 00:19:03.969 ] 00:19:03.969 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.969 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:03.969 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:03.969 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:03.969 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:03.969 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.969 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.969 [2024-11-04 14:45:02.838160] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:03.969 [2024-11-04 14:45:02.838223] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:03.969 [2024-11-04 14:45:02.838264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:03.969 [2024-11-04 14:45:02.840725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:03.969 [2024-11-04 14:45:02.840957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:03.969 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.969 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:03.969 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:03.969 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:03.969 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:03.969 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:03.969 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:03.969 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.969 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.969 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.969 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.969 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.969 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:03.969 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.969 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.969 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.969 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.969 "name": "Existed_Raid", 00:19:03.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.969 "strip_size_kb": 64, 00:19:03.969 "state": "configuring", 00:19:03.969 "raid_level": "raid5f", 00:19:03.969 
"superblock": false, 00:19:03.969 "num_base_bdevs": 4, 00:19:03.969 "num_base_bdevs_discovered": 3, 00:19:03.969 "num_base_bdevs_operational": 4, 00:19:03.969 "base_bdevs_list": [ 00:19:03.969 { 00:19:03.969 "name": "BaseBdev1", 00:19:03.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.969 "is_configured": false, 00:19:03.969 "data_offset": 0, 00:19:03.969 "data_size": 0 00:19:03.969 }, 00:19:03.969 { 00:19:03.969 "name": "BaseBdev2", 00:19:03.969 "uuid": "cc4df765-6094-4e67-9938-969513e0cff6", 00:19:03.969 "is_configured": true, 00:19:03.969 "data_offset": 0, 00:19:03.969 "data_size": 65536 00:19:03.969 }, 00:19:03.969 { 00:19:03.969 "name": "BaseBdev3", 00:19:03.969 "uuid": "57cb28ee-932f-4131-a9b9-e281e5acecaa", 00:19:03.969 "is_configured": true, 00:19:03.969 "data_offset": 0, 00:19:03.969 "data_size": 65536 00:19:03.969 }, 00:19:03.969 { 00:19:03.969 "name": "BaseBdev4", 00:19:03.969 "uuid": "e6053f7e-f9cb-43e5-8c38-88f5fcdaba9d", 00:19:03.969 "is_configured": true, 00:19:03.969 "data_offset": 0, 00:19:03.969 "data_size": 65536 00:19:03.969 } 00:19:03.969 ] 00:19:03.969 }' 00:19:03.969 14:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.969 14:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.535 14:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:04.535 14:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.535 14:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.536 [2024-11-04 14:45:03.354284] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:04.536 14:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.536 14:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:19:04.536 14:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:04.536 14:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:04.536 14:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:04.536 14:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:04.536 14:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:04.536 14:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.536 14:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.536 14:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.536 14:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.536 14:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.536 14:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:04.536 14:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.536 14:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.536 14:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.536 14:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.536 "name": "Existed_Raid", 00:19:04.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.536 "strip_size_kb": 64, 00:19:04.536 "state": "configuring", 00:19:04.536 "raid_level": "raid5f", 00:19:04.536 "superblock": false, 
00:19:04.536 "num_base_bdevs": 4, 00:19:04.536 "num_base_bdevs_discovered": 2, 00:19:04.536 "num_base_bdevs_operational": 4, 00:19:04.536 "base_bdevs_list": [ 00:19:04.536 { 00:19:04.536 "name": "BaseBdev1", 00:19:04.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.536 "is_configured": false, 00:19:04.536 "data_offset": 0, 00:19:04.536 "data_size": 0 00:19:04.536 }, 00:19:04.536 { 00:19:04.536 "name": null, 00:19:04.536 "uuid": "cc4df765-6094-4e67-9938-969513e0cff6", 00:19:04.536 "is_configured": false, 00:19:04.536 "data_offset": 0, 00:19:04.536 "data_size": 65536 00:19:04.536 }, 00:19:04.536 { 00:19:04.536 "name": "BaseBdev3", 00:19:04.536 "uuid": "57cb28ee-932f-4131-a9b9-e281e5acecaa", 00:19:04.536 "is_configured": true, 00:19:04.536 "data_offset": 0, 00:19:04.536 "data_size": 65536 00:19:04.536 }, 00:19:04.536 { 00:19:04.536 "name": "BaseBdev4", 00:19:04.536 "uuid": "e6053f7e-f9cb-43e5-8c38-88f5fcdaba9d", 00:19:04.536 "is_configured": true, 00:19:04.536 "data_offset": 0, 00:19:04.536 "data_size": 65536 00:19:04.536 } 00:19:04.536 ] 00:19:04.536 }' 00:19:04.536 14:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.536 14:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.795 14:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.795 14:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.795 14:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.795 14:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:04.795 14:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.795 14:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:19:04.795 
14:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:04.795 14:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.795 14:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.795 [2024-11-04 14:45:03.904020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:04.795 BaseBdev1 00:19:04.795 14:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.795 14:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:19:04.795 14:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:19:04.795 14:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:04.795 14:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:04.795 14:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:04.795 14:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:04.795 14:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:04.795 14:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.795 14:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.053 14:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.053 14:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:05.053 14:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.053 
14:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.053 [ 00:19:05.053 { 00:19:05.053 "name": "BaseBdev1", 00:19:05.053 "aliases": [ 00:19:05.053 "681b676d-3bf7-4a54-8c70-a8e8bd74e903" 00:19:05.053 ], 00:19:05.053 "product_name": "Malloc disk", 00:19:05.053 "block_size": 512, 00:19:05.053 "num_blocks": 65536, 00:19:05.053 "uuid": "681b676d-3bf7-4a54-8c70-a8e8bd74e903", 00:19:05.053 "assigned_rate_limits": { 00:19:05.053 "rw_ios_per_sec": 0, 00:19:05.053 "rw_mbytes_per_sec": 0, 00:19:05.053 "r_mbytes_per_sec": 0, 00:19:05.053 "w_mbytes_per_sec": 0 00:19:05.053 }, 00:19:05.053 "claimed": true, 00:19:05.053 "claim_type": "exclusive_write", 00:19:05.053 "zoned": false, 00:19:05.053 "supported_io_types": { 00:19:05.053 "read": true, 00:19:05.053 "write": true, 00:19:05.053 "unmap": true, 00:19:05.053 "flush": true, 00:19:05.053 "reset": true, 00:19:05.053 "nvme_admin": false, 00:19:05.053 "nvme_io": false, 00:19:05.053 "nvme_io_md": false, 00:19:05.053 "write_zeroes": true, 00:19:05.053 "zcopy": true, 00:19:05.053 "get_zone_info": false, 00:19:05.053 "zone_management": false, 00:19:05.053 "zone_append": false, 00:19:05.053 "compare": false, 00:19:05.053 "compare_and_write": false, 00:19:05.053 "abort": true, 00:19:05.053 "seek_hole": false, 00:19:05.053 "seek_data": false, 00:19:05.053 "copy": true, 00:19:05.053 "nvme_iov_md": false 00:19:05.053 }, 00:19:05.053 "memory_domains": [ 00:19:05.053 { 00:19:05.053 "dma_device_id": "system", 00:19:05.053 "dma_device_type": 1 00:19:05.053 }, 00:19:05.053 { 00:19:05.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.053 "dma_device_type": 2 00:19:05.053 } 00:19:05.053 ], 00:19:05.053 "driver_specific": {} 00:19:05.053 } 00:19:05.053 ] 00:19:05.053 14:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.053 14:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:05.053 14:45:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:05.053 14:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:05.053 14:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:05.053 14:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:05.053 14:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:05.053 14:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:05.053 14:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.053 14:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.053 14:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.053 14:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.053 14:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.053 14:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:05.053 14:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.053 14:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.053 14:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.053 14:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.053 "name": "Existed_Raid", 00:19:05.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.053 "strip_size_kb": 64, 00:19:05.053 "state": 
"configuring", 00:19:05.053 "raid_level": "raid5f", 00:19:05.053 "superblock": false, 00:19:05.053 "num_base_bdevs": 4, 00:19:05.053 "num_base_bdevs_discovered": 3, 00:19:05.053 "num_base_bdevs_operational": 4, 00:19:05.053 "base_bdevs_list": [ 00:19:05.053 { 00:19:05.053 "name": "BaseBdev1", 00:19:05.053 "uuid": "681b676d-3bf7-4a54-8c70-a8e8bd74e903", 00:19:05.053 "is_configured": true, 00:19:05.053 "data_offset": 0, 00:19:05.053 "data_size": 65536 00:19:05.053 }, 00:19:05.053 { 00:19:05.053 "name": null, 00:19:05.053 "uuid": "cc4df765-6094-4e67-9938-969513e0cff6", 00:19:05.053 "is_configured": false, 00:19:05.053 "data_offset": 0, 00:19:05.053 "data_size": 65536 00:19:05.053 }, 00:19:05.053 { 00:19:05.053 "name": "BaseBdev3", 00:19:05.053 "uuid": "57cb28ee-932f-4131-a9b9-e281e5acecaa", 00:19:05.053 "is_configured": true, 00:19:05.053 "data_offset": 0, 00:19:05.053 "data_size": 65536 00:19:05.053 }, 00:19:05.053 { 00:19:05.053 "name": "BaseBdev4", 00:19:05.053 "uuid": "e6053f7e-f9cb-43e5-8c38-88f5fcdaba9d", 00:19:05.053 "is_configured": true, 00:19:05.053 "data_offset": 0, 00:19:05.053 "data_size": 65536 00:19:05.053 } 00:19:05.053 ] 00:19:05.053 }' 00:19:05.053 14:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.053 14:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.310 14:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:05.310 14:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.310 14:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.311 14:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.568 14:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.568 14:45:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:05.568 14:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:05.568 14:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.568 14:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.568 [2024-11-04 14:45:04.476339] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:05.568 14:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.568 14:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:05.568 14:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:05.568 14:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:05.568 14:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:05.568 14:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:05.568 14:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:05.568 14:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.568 14:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.568 14:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.568 14:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.568 14:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.568 14:45:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:05.568 14:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.568 14:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.568 14:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.568 14:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.568 "name": "Existed_Raid", 00:19:05.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.568 "strip_size_kb": 64, 00:19:05.568 "state": "configuring", 00:19:05.568 "raid_level": "raid5f", 00:19:05.568 "superblock": false, 00:19:05.568 "num_base_bdevs": 4, 00:19:05.568 "num_base_bdevs_discovered": 2, 00:19:05.568 "num_base_bdevs_operational": 4, 00:19:05.568 "base_bdevs_list": [ 00:19:05.568 { 00:19:05.568 "name": "BaseBdev1", 00:19:05.568 "uuid": "681b676d-3bf7-4a54-8c70-a8e8bd74e903", 00:19:05.568 "is_configured": true, 00:19:05.568 "data_offset": 0, 00:19:05.568 "data_size": 65536 00:19:05.568 }, 00:19:05.568 { 00:19:05.568 "name": null, 00:19:05.568 "uuid": "cc4df765-6094-4e67-9938-969513e0cff6", 00:19:05.568 "is_configured": false, 00:19:05.568 "data_offset": 0, 00:19:05.568 "data_size": 65536 00:19:05.568 }, 00:19:05.568 { 00:19:05.568 "name": null, 00:19:05.568 "uuid": "57cb28ee-932f-4131-a9b9-e281e5acecaa", 00:19:05.568 "is_configured": false, 00:19:05.568 "data_offset": 0, 00:19:05.568 "data_size": 65536 00:19:05.568 }, 00:19:05.568 { 00:19:05.568 "name": "BaseBdev4", 00:19:05.568 "uuid": "e6053f7e-f9cb-43e5-8c38-88f5fcdaba9d", 00:19:05.568 "is_configured": true, 00:19:05.568 "data_offset": 0, 00:19:05.568 "data_size": 65536 00:19:05.568 } 00:19:05.568 ] 00:19:05.568 }' 00:19:05.569 14:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.569 14:45:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.134 14:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:06.134 14:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.134 14:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.134 14:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.134 14:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.134 14:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:06.134 14:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:06.134 14:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.134 14:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.134 [2024-11-04 14:45:05.016476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:06.134 14:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.134 14:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:06.134 14:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:06.134 14:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:06.134 14:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:06.134 14:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:06.134 
14:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:06.134 14:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.134 14:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.135 14:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.135 14:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.135 14:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.135 14:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:06.135 14:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.135 14:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.135 14:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.135 14:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.135 "name": "Existed_Raid", 00:19:06.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.135 "strip_size_kb": 64, 00:19:06.135 "state": "configuring", 00:19:06.135 "raid_level": "raid5f", 00:19:06.135 "superblock": false, 00:19:06.135 "num_base_bdevs": 4, 00:19:06.135 "num_base_bdevs_discovered": 3, 00:19:06.135 "num_base_bdevs_operational": 4, 00:19:06.135 "base_bdevs_list": [ 00:19:06.135 { 00:19:06.135 "name": "BaseBdev1", 00:19:06.135 "uuid": "681b676d-3bf7-4a54-8c70-a8e8bd74e903", 00:19:06.135 "is_configured": true, 00:19:06.135 "data_offset": 0, 00:19:06.135 "data_size": 65536 00:19:06.135 }, 00:19:06.135 { 00:19:06.135 "name": null, 00:19:06.135 "uuid": "cc4df765-6094-4e67-9938-969513e0cff6", 00:19:06.135 "is_configured": 
false, 00:19:06.135 "data_offset": 0, 00:19:06.135 "data_size": 65536 00:19:06.135 }, 00:19:06.135 { 00:19:06.135 "name": "BaseBdev3", 00:19:06.135 "uuid": "57cb28ee-932f-4131-a9b9-e281e5acecaa", 00:19:06.135 "is_configured": true, 00:19:06.135 "data_offset": 0, 00:19:06.135 "data_size": 65536 00:19:06.135 }, 00:19:06.135 { 00:19:06.135 "name": "BaseBdev4", 00:19:06.135 "uuid": "e6053f7e-f9cb-43e5-8c38-88f5fcdaba9d", 00:19:06.135 "is_configured": true, 00:19:06.135 "data_offset": 0, 00:19:06.135 "data_size": 65536 00:19:06.135 } 00:19:06.135 ] 00:19:06.135 }' 00:19:06.135 14:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.135 14:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.418 14:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.418 14:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.418 14:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:06.418 14:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.418 14:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.418 14:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:06.418 14:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:06.418 14:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.418 14:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.418 [2024-11-04 14:45:05.520637] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:06.676 14:45:05 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.676 14:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:06.676 14:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:06.676 14:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:06.676 14:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:06.676 14:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:06.676 14:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:06.676 14:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.676 14:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.676 14:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.676 14:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.676 14:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.676 14:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:06.676 14:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.676 14:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.676 14:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.676 14:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.676 "name": "Existed_Raid", 00:19:06.676 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:06.676 "strip_size_kb": 64, 00:19:06.676 "state": "configuring", 00:19:06.676 "raid_level": "raid5f", 00:19:06.676 "superblock": false, 00:19:06.676 "num_base_bdevs": 4, 00:19:06.676 "num_base_bdevs_discovered": 2, 00:19:06.676 "num_base_bdevs_operational": 4, 00:19:06.676 "base_bdevs_list": [ 00:19:06.676 { 00:19:06.676 "name": null, 00:19:06.676 "uuid": "681b676d-3bf7-4a54-8c70-a8e8bd74e903", 00:19:06.676 "is_configured": false, 00:19:06.677 "data_offset": 0, 00:19:06.677 "data_size": 65536 00:19:06.677 }, 00:19:06.677 { 00:19:06.677 "name": null, 00:19:06.677 "uuid": "cc4df765-6094-4e67-9938-969513e0cff6", 00:19:06.677 "is_configured": false, 00:19:06.677 "data_offset": 0, 00:19:06.677 "data_size": 65536 00:19:06.677 }, 00:19:06.677 { 00:19:06.677 "name": "BaseBdev3", 00:19:06.677 "uuid": "57cb28ee-932f-4131-a9b9-e281e5acecaa", 00:19:06.677 "is_configured": true, 00:19:06.677 "data_offset": 0, 00:19:06.677 "data_size": 65536 00:19:06.677 }, 00:19:06.677 { 00:19:06.677 "name": "BaseBdev4", 00:19:06.677 "uuid": "e6053f7e-f9cb-43e5-8c38-88f5fcdaba9d", 00:19:06.677 "is_configured": true, 00:19:06.677 "data_offset": 0, 00:19:06.677 "data_size": 65536 00:19:06.677 } 00:19:06.677 ] 00:19:06.677 }' 00:19:06.677 14:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.677 14:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.243 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.243 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:07.243 14:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.243 14:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.243 14:45:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.243 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:07.243 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:07.243 14:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.243 14:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.243 [2024-11-04 14:45:06.150686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:07.243 14:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.243 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:07.243 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:07.243 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:07.243 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:07.243 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:07.243 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:07.243 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.243 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.243 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.243 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.243 14:45:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.243 14:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.243 14:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.243 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:07.243 14:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.243 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.243 "name": "Existed_Raid", 00:19:07.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.243 "strip_size_kb": 64, 00:19:07.243 "state": "configuring", 00:19:07.243 "raid_level": "raid5f", 00:19:07.243 "superblock": false, 00:19:07.244 "num_base_bdevs": 4, 00:19:07.244 "num_base_bdevs_discovered": 3, 00:19:07.244 "num_base_bdevs_operational": 4, 00:19:07.244 "base_bdevs_list": [ 00:19:07.244 { 00:19:07.244 "name": null, 00:19:07.244 "uuid": "681b676d-3bf7-4a54-8c70-a8e8bd74e903", 00:19:07.244 "is_configured": false, 00:19:07.244 "data_offset": 0, 00:19:07.244 "data_size": 65536 00:19:07.244 }, 00:19:07.244 { 00:19:07.244 "name": "BaseBdev2", 00:19:07.244 "uuid": "cc4df765-6094-4e67-9938-969513e0cff6", 00:19:07.244 "is_configured": true, 00:19:07.244 "data_offset": 0, 00:19:07.244 "data_size": 65536 00:19:07.244 }, 00:19:07.244 { 00:19:07.244 "name": "BaseBdev3", 00:19:07.244 "uuid": "57cb28ee-932f-4131-a9b9-e281e5acecaa", 00:19:07.244 "is_configured": true, 00:19:07.244 "data_offset": 0, 00:19:07.244 "data_size": 65536 00:19:07.244 }, 00:19:07.244 { 00:19:07.244 "name": "BaseBdev4", 00:19:07.244 "uuid": "e6053f7e-f9cb-43e5-8c38-88f5fcdaba9d", 00:19:07.244 "is_configured": true, 00:19:07.244 "data_offset": 0, 00:19:07.244 "data_size": 65536 00:19:07.244 } 00:19:07.244 ] 00:19:07.244 }' 00:19:07.244 14:45:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.244 14:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.811 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.811 14:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.811 14:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.811 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:07.811 14:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.811 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:07.811 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.811 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:07.811 14:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.811 14:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.811 14:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.811 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 681b676d-3bf7-4a54-8c70-a8e8bd74e903 00:19:07.811 14:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.811 14:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.811 [2024-11-04 14:45:06.844478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:07.811 [2024-11-04 
14:45:06.844761] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:07.811 [2024-11-04 14:45:06.844785] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:07.811 [2024-11-04 14:45:06.845163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:07.811 [2024-11-04 14:45:06.851790] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:07.811 [2024-11-04 14:45:06.851997] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:07.811 [2024-11-04 14:45:06.852497] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:07.811 NewBaseBdev 00:19:07.811 14:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.811 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:07.811 14:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:19:07.811 14:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:07.811 14:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:19:07.811 14:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:07.811 14:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:07.811 14:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:07.811 14:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.811 14:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.811 14:45:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.811 14:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:07.811 14:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.811 14:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.811 [ 00:19:07.811 { 00:19:07.811 "name": "NewBaseBdev", 00:19:07.811 "aliases": [ 00:19:07.811 "681b676d-3bf7-4a54-8c70-a8e8bd74e903" 00:19:07.811 ], 00:19:07.811 "product_name": "Malloc disk", 00:19:07.811 "block_size": 512, 00:19:07.811 "num_blocks": 65536, 00:19:07.811 "uuid": "681b676d-3bf7-4a54-8c70-a8e8bd74e903", 00:19:07.811 "assigned_rate_limits": { 00:19:07.811 "rw_ios_per_sec": 0, 00:19:07.811 "rw_mbytes_per_sec": 0, 00:19:07.811 "r_mbytes_per_sec": 0, 00:19:07.811 "w_mbytes_per_sec": 0 00:19:07.811 }, 00:19:07.811 "claimed": true, 00:19:07.811 "claim_type": "exclusive_write", 00:19:07.811 "zoned": false, 00:19:07.811 "supported_io_types": { 00:19:07.811 "read": true, 00:19:07.811 "write": true, 00:19:07.811 "unmap": true, 00:19:07.811 "flush": true, 00:19:07.811 "reset": true, 00:19:07.811 "nvme_admin": false, 00:19:07.811 "nvme_io": false, 00:19:07.811 "nvme_io_md": false, 00:19:07.811 "write_zeroes": true, 00:19:07.811 "zcopy": true, 00:19:07.811 "get_zone_info": false, 00:19:07.811 "zone_management": false, 00:19:07.811 "zone_append": false, 00:19:07.811 "compare": false, 00:19:07.811 "compare_and_write": false, 00:19:07.811 "abort": true, 00:19:07.811 "seek_hole": false, 00:19:07.811 "seek_data": false, 00:19:07.811 "copy": true, 00:19:07.811 "nvme_iov_md": false 00:19:07.811 }, 00:19:07.811 "memory_domains": [ 00:19:07.811 { 00:19:07.811 "dma_device_id": "system", 00:19:07.811 "dma_device_type": 1 00:19:07.811 }, 00:19:07.811 { 00:19:07.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.811 "dma_device_type": 2 00:19:07.811 } 
00:19:07.811 ], 00:19:07.811 "driver_specific": {} 00:19:07.811 } 00:19:07.811 ] 00:19:07.811 14:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.811 14:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:19:07.811 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:19:07.811 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:07.811 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:07.811 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:07.812 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:07.812 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:07.812 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.812 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.812 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.812 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.812 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.812 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:07.812 14:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.812 14:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.812 14:45:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.070 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.070 "name": "Existed_Raid", 00:19:08.070 "uuid": "3f727628-c9f9-4fe2-a14b-4220f8f2c22a", 00:19:08.070 "strip_size_kb": 64, 00:19:08.070 "state": "online", 00:19:08.070 "raid_level": "raid5f", 00:19:08.070 "superblock": false, 00:19:08.070 "num_base_bdevs": 4, 00:19:08.070 "num_base_bdevs_discovered": 4, 00:19:08.070 "num_base_bdevs_operational": 4, 00:19:08.070 "base_bdevs_list": [ 00:19:08.070 { 00:19:08.070 "name": "NewBaseBdev", 00:19:08.070 "uuid": "681b676d-3bf7-4a54-8c70-a8e8bd74e903", 00:19:08.070 "is_configured": true, 00:19:08.070 "data_offset": 0, 00:19:08.070 "data_size": 65536 00:19:08.070 }, 00:19:08.070 { 00:19:08.070 "name": "BaseBdev2", 00:19:08.070 "uuid": "cc4df765-6094-4e67-9938-969513e0cff6", 00:19:08.070 "is_configured": true, 00:19:08.070 "data_offset": 0, 00:19:08.070 "data_size": 65536 00:19:08.070 }, 00:19:08.070 { 00:19:08.070 "name": "BaseBdev3", 00:19:08.070 "uuid": "57cb28ee-932f-4131-a9b9-e281e5acecaa", 00:19:08.070 "is_configured": true, 00:19:08.070 "data_offset": 0, 00:19:08.070 "data_size": 65536 00:19:08.070 }, 00:19:08.070 { 00:19:08.070 "name": "BaseBdev4", 00:19:08.070 "uuid": "e6053f7e-f9cb-43e5-8c38-88f5fcdaba9d", 00:19:08.070 "is_configured": true, 00:19:08.070 "data_offset": 0, 00:19:08.070 "data_size": 65536 00:19:08.070 } 00:19:08.070 ] 00:19:08.070 }' 00:19:08.070 14:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.070 14:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.653 14:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:08.653 14:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:08.653 14:45:07 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:08.653 14:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:08.653 14:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:08.653 14:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:08.653 14:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:08.653 14:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:08.653 14:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.653 14:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.653 [2024-11-04 14:45:07.468513] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:08.653 14:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.653 14:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:08.653 "name": "Existed_Raid", 00:19:08.653 "aliases": [ 00:19:08.653 "3f727628-c9f9-4fe2-a14b-4220f8f2c22a" 00:19:08.653 ], 00:19:08.653 "product_name": "Raid Volume", 00:19:08.653 "block_size": 512, 00:19:08.653 "num_blocks": 196608, 00:19:08.653 "uuid": "3f727628-c9f9-4fe2-a14b-4220f8f2c22a", 00:19:08.653 "assigned_rate_limits": { 00:19:08.653 "rw_ios_per_sec": 0, 00:19:08.653 "rw_mbytes_per_sec": 0, 00:19:08.653 "r_mbytes_per_sec": 0, 00:19:08.653 "w_mbytes_per_sec": 0 00:19:08.653 }, 00:19:08.653 "claimed": false, 00:19:08.653 "zoned": false, 00:19:08.653 "supported_io_types": { 00:19:08.653 "read": true, 00:19:08.653 "write": true, 00:19:08.653 "unmap": false, 00:19:08.653 "flush": false, 00:19:08.653 "reset": true, 00:19:08.653 "nvme_admin": false, 00:19:08.653 "nvme_io": false, 00:19:08.653 "nvme_io_md": 
false, 00:19:08.653 "write_zeroes": true, 00:19:08.653 "zcopy": false, 00:19:08.653 "get_zone_info": false, 00:19:08.653 "zone_management": false, 00:19:08.653 "zone_append": false, 00:19:08.653 "compare": false, 00:19:08.653 "compare_and_write": false, 00:19:08.653 "abort": false, 00:19:08.653 "seek_hole": false, 00:19:08.653 "seek_data": false, 00:19:08.653 "copy": false, 00:19:08.653 "nvme_iov_md": false 00:19:08.653 }, 00:19:08.653 "driver_specific": { 00:19:08.653 "raid": { 00:19:08.653 "uuid": "3f727628-c9f9-4fe2-a14b-4220f8f2c22a", 00:19:08.653 "strip_size_kb": 64, 00:19:08.653 "state": "online", 00:19:08.653 "raid_level": "raid5f", 00:19:08.653 "superblock": false, 00:19:08.653 "num_base_bdevs": 4, 00:19:08.654 "num_base_bdevs_discovered": 4, 00:19:08.654 "num_base_bdevs_operational": 4, 00:19:08.654 "base_bdevs_list": [ 00:19:08.654 { 00:19:08.654 "name": "NewBaseBdev", 00:19:08.654 "uuid": "681b676d-3bf7-4a54-8c70-a8e8bd74e903", 00:19:08.654 "is_configured": true, 00:19:08.654 "data_offset": 0, 00:19:08.654 "data_size": 65536 00:19:08.654 }, 00:19:08.654 { 00:19:08.654 "name": "BaseBdev2", 00:19:08.654 "uuid": "cc4df765-6094-4e67-9938-969513e0cff6", 00:19:08.654 "is_configured": true, 00:19:08.654 "data_offset": 0, 00:19:08.654 "data_size": 65536 00:19:08.654 }, 00:19:08.654 { 00:19:08.654 "name": "BaseBdev3", 00:19:08.654 "uuid": "57cb28ee-932f-4131-a9b9-e281e5acecaa", 00:19:08.654 "is_configured": true, 00:19:08.654 "data_offset": 0, 00:19:08.654 "data_size": 65536 00:19:08.654 }, 00:19:08.654 { 00:19:08.654 "name": "BaseBdev4", 00:19:08.654 "uuid": "e6053f7e-f9cb-43e5-8c38-88f5fcdaba9d", 00:19:08.654 "is_configured": true, 00:19:08.654 "data_offset": 0, 00:19:08.654 "data_size": 65536 00:19:08.654 } 00:19:08.654 ] 00:19:08.654 } 00:19:08.654 } 00:19:08.654 }' 00:19:08.654 14:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:08.654 14:45:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:08.654 BaseBdev2 00:19:08.654 BaseBdev3 00:19:08.654 BaseBdev4' 00:19:08.654 14:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:08.654 14:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:08.654 14:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:08.654 14:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:08.654 14:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.654 14:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:08.654 14:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.654 14:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.654 14:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:08.654 14:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:08.654 14:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:08.654 14:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:08.654 14:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:08.654 14:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.654 14:45:07 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:08.654 14:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.654 14:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:08.654 14:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:08.654 14:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:08.654 14:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:08.654 14:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:08.654 14:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.654 14:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.654 14:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.654 14:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:08.654 14:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:08.654 14:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:08.654 14:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:08.654 14:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.654 14:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.654 14:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:08.934 14:45:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.934 14:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:08.934 14:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:08.934 14:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:08.934 14:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.934 14:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:08.934 [2024-11-04 14:45:07.804306] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:08.934 [2024-11-04 14:45:07.804346] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:08.934 [2024-11-04 14:45:07.804448] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:08.934 [2024-11-04 14:45:07.804825] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:08.934 [2024-11-04 14:45:07.804843] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:08.934 14:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.934 14:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83161 00:19:08.934 14:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 83161 ']' 00:19:08.934 14:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # kill -0 83161 00:19:08.934 14:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:19:08.934 14:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 
00:19:08.934 14:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83161 00:19:08.934 killing process with pid 83161 00:19:08.934 14:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:08.934 14:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:08.934 14:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83161' 00:19:08.934 14:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 83161 00:19:08.934 [2024-11-04 14:45:07.839898] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:08.934 14:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 83161 00:19:09.205 [2024-11-04 14:45:08.198312] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:10.138 14:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:19:10.139 00:19:10.139 real 0m12.649s 00:19:10.139 user 0m20.882s 00:19:10.139 sys 0m1.806s 00:19:10.139 14:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:10.139 14:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.139 ************************************ 00:19:10.139 END TEST raid5f_state_function_test 00:19:10.139 ************************************ 00:19:10.397 14:45:09 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:19:10.397 14:45:09 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:10.397 14:45:09 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:10.397 14:45:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:10.397 ************************************ 00:19:10.397 START TEST 
raid5f_state_function_test_sb 00:19:10.397 ************************************ 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 true 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:19:10.397 
14:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:10.397 Process raid pid: 83842 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83842 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83842' 00:19:10.397 14:45:09 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83842 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 83842 ']' 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:10.397 14:45:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.397 [2024-11-04 14:45:09.407854] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:19:10.397 [2024-11-04 14:45:09.408298] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:10.655 [2024-11-04 14:45:09.585022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.655 [2024-11-04 14:45:09.715077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.913 [2024-11-04 14:45:09.921648] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:10.913 [2024-11-04 14:45:09.921704] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:11.495 14:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:11.495 14:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:19:11.495 14:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:11.495 14:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.495 14:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.495 [2024-11-04 14:45:10.459744] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:11.495 [2024-11-04 14:45:10.459820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:11.495 [2024-11-04 14:45:10.459838] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:11.495 [2024-11-04 14:45:10.459856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:11.495 [2024-11-04 14:45:10.459866] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:19:11.495 [2024-11-04 14:45:10.459882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:11.495 [2024-11-04 14:45:10.459892] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:11.495 [2024-11-04 14:45:10.459906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:11.495 14:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.495 14:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:11.495 14:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:11.495 14:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:11.495 14:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:11.495 14:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:11.495 14:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:11.495 14:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.495 14:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.495 14:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.495 14:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.495 14:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.495 14:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:19:11.495 14:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.495 14:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.495 14:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.495 14:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.495 "name": "Existed_Raid", 00:19:11.495 "uuid": "34c9f23b-cd7e-4cba-9f9c-1dd16f8e9b91", 00:19:11.495 "strip_size_kb": 64, 00:19:11.495 "state": "configuring", 00:19:11.495 "raid_level": "raid5f", 00:19:11.495 "superblock": true, 00:19:11.495 "num_base_bdevs": 4, 00:19:11.495 "num_base_bdevs_discovered": 0, 00:19:11.495 "num_base_bdevs_operational": 4, 00:19:11.495 "base_bdevs_list": [ 00:19:11.495 { 00:19:11.495 "name": "BaseBdev1", 00:19:11.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.495 "is_configured": false, 00:19:11.495 "data_offset": 0, 00:19:11.495 "data_size": 0 00:19:11.495 }, 00:19:11.495 { 00:19:11.495 "name": "BaseBdev2", 00:19:11.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.495 "is_configured": false, 00:19:11.495 "data_offset": 0, 00:19:11.495 "data_size": 0 00:19:11.495 }, 00:19:11.495 { 00:19:11.495 "name": "BaseBdev3", 00:19:11.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.495 "is_configured": false, 00:19:11.495 "data_offset": 0, 00:19:11.496 "data_size": 0 00:19:11.496 }, 00:19:11.496 { 00:19:11.496 "name": "BaseBdev4", 00:19:11.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.496 "is_configured": false, 00:19:11.496 "data_offset": 0, 00:19:11.496 "data_size": 0 00:19:11.496 } 00:19:11.496 ] 00:19:11.496 }' 00:19:11.496 14:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.496 14:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:19:12.062 14:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:12.062 14:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.062 14:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.062 [2024-11-04 14:45:10.959776] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:12.062 [2024-11-04 14:45:10.959826] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:12.062 14:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.062 14:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:12.062 14:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.062 14:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.062 [2024-11-04 14:45:10.967785] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:12.062 [2024-11-04 14:45:10.967843] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:12.062 [2024-11-04 14:45:10.967858] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:12.062 [2024-11-04 14:45:10.967875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:12.062 [2024-11-04 14:45:10.967885] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:12.062 [2024-11-04 14:45:10.967900] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:12.062 [2024-11-04 14:45:10.967909] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:12.062 [2024-11-04 14:45:10.967942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:12.062 14:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.062 14:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:12.062 14:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.062 14:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.062 [2024-11-04 14:45:11.012493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:12.062 BaseBdev1 00:19:12.062 14:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.062 14:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:12.062 14:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:19:12.062 14:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:12.062 14:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:12.062 14:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:12.062 14:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:12.062 14:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:12.062 14:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.062 14:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:19:12.062 14:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.062 14:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:12.062 14:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.062 14:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.062 [ 00:19:12.062 { 00:19:12.062 "name": "BaseBdev1", 00:19:12.062 "aliases": [ 00:19:12.062 "6266b6a3-64e3-4d49-a493-7812e7128605" 00:19:12.062 ], 00:19:12.062 "product_name": "Malloc disk", 00:19:12.062 "block_size": 512, 00:19:12.062 "num_blocks": 65536, 00:19:12.062 "uuid": "6266b6a3-64e3-4d49-a493-7812e7128605", 00:19:12.062 "assigned_rate_limits": { 00:19:12.062 "rw_ios_per_sec": 0, 00:19:12.062 "rw_mbytes_per_sec": 0, 00:19:12.062 "r_mbytes_per_sec": 0, 00:19:12.062 "w_mbytes_per_sec": 0 00:19:12.062 }, 00:19:12.062 "claimed": true, 00:19:12.062 "claim_type": "exclusive_write", 00:19:12.062 "zoned": false, 00:19:12.062 "supported_io_types": { 00:19:12.062 "read": true, 00:19:12.062 "write": true, 00:19:12.062 "unmap": true, 00:19:12.062 "flush": true, 00:19:12.062 "reset": true, 00:19:12.062 "nvme_admin": false, 00:19:12.062 "nvme_io": false, 00:19:12.062 "nvme_io_md": false, 00:19:12.062 "write_zeroes": true, 00:19:12.062 "zcopy": true, 00:19:12.062 "get_zone_info": false, 00:19:12.062 "zone_management": false, 00:19:12.062 "zone_append": false, 00:19:12.062 "compare": false, 00:19:12.062 "compare_and_write": false, 00:19:12.062 "abort": true, 00:19:12.062 "seek_hole": false, 00:19:12.062 "seek_data": false, 00:19:12.062 "copy": true, 00:19:12.062 "nvme_iov_md": false 00:19:12.062 }, 00:19:12.062 "memory_domains": [ 00:19:12.062 { 00:19:12.062 "dma_device_id": "system", 00:19:12.062 "dma_device_type": 1 00:19:12.062 }, 00:19:12.062 { 00:19:12.062 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:12.062 "dma_device_type": 2 00:19:12.062 } 00:19:12.062 ], 00:19:12.062 "driver_specific": {} 00:19:12.062 } 00:19:12.062 ] 00:19:12.063 14:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.063 14:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:12.063 14:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:12.063 14:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:12.063 14:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:12.063 14:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:12.063 14:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:12.063 14:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:12.063 14:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.063 14:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.063 14:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.063 14:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.063 14:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.063 14:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:12.063 14:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.063 14:45:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.063 14:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.063 14:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.063 "name": "Existed_Raid", 00:19:12.063 "uuid": "97f6af0f-22b1-4fdb-a394-e4ca6c7d3a17", 00:19:12.063 "strip_size_kb": 64, 00:19:12.063 "state": "configuring", 00:19:12.063 "raid_level": "raid5f", 00:19:12.063 "superblock": true, 00:19:12.063 "num_base_bdevs": 4, 00:19:12.063 "num_base_bdevs_discovered": 1, 00:19:12.063 "num_base_bdevs_operational": 4, 00:19:12.063 "base_bdevs_list": [ 00:19:12.063 { 00:19:12.063 "name": "BaseBdev1", 00:19:12.063 "uuid": "6266b6a3-64e3-4d49-a493-7812e7128605", 00:19:12.063 "is_configured": true, 00:19:12.063 "data_offset": 2048, 00:19:12.063 "data_size": 63488 00:19:12.063 }, 00:19:12.063 { 00:19:12.063 "name": "BaseBdev2", 00:19:12.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.063 "is_configured": false, 00:19:12.063 "data_offset": 0, 00:19:12.063 "data_size": 0 00:19:12.063 }, 00:19:12.063 { 00:19:12.063 "name": "BaseBdev3", 00:19:12.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.063 "is_configured": false, 00:19:12.063 "data_offset": 0, 00:19:12.063 "data_size": 0 00:19:12.063 }, 00:19:12.063 { 00:19:12.063 "name": "BaseBdev4", 00:19:12.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.063 "is_configured": false, 00:19:12.063 "data_offset": 0, 00:19:12.063 "data_size": 0 00:19:12.063 } 00:19:12.063 ] 00:19:12.063 }' 00:19:12.063 14:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.063 14:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.628 14:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:12.628 14:45:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.628 14:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.628 [2024-11-04 14:45:11.568734] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:12.628 [2024-11-04 14:45:11.568974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:12.628 14:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.628 14:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:12.628 14:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.628 14:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.628 [2024-11-04 14:45:11.580847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:12.628 [2024-11-04 14:45:11.583330] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:12.628 [2024-11-04 14:45:11.583390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:12.628 [2024-11-04 14:45:11.583408] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:12.628 [2024-11-04 14:45:11.583426] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:12.628 [2024-11-04 14:45:11.583437] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:12.628 [2024-11-04 14:45:11.583450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:12.628 14:45:11 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.628 14:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:12.628 14:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:12.628 14:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:12.628 14:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:12.628 14:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:12.628 14:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:12.628 14:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:12.628 14:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:12.628 14:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.628 14:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.628 14:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.628 14:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.628 14:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.628 14:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.628 14:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.628 14:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:12.628 14:45:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.628 14:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.628 "name": "Existed_Raid", 00:19:12.628 "uuid": "d341d782-368a-4f6f-ac69-b729bb361d89", 00:19:12.628 "strip_size_kb": 64, 00:19:12.628 "state": "configuring", 00:19:12.628 "raid_level": "raid5f", 00:19:12.628 "superblock": true, 00:19:12.628 "num_base_bdevs": 4, 00:19:12.628 "num_base_bdevs_discovered": 1, 00:19:12.628 "num_base_bdevs_operational": 4, 00:19:12.628 "base_bdevs_list": [ 00:19:12.628 { 00:19:12.628 "name": "BaseBdev1", 00:19:12.628 "uuid": "6266b6a3-64e3-4d49-a493-7812e7128605", 00:19:12.628 "is_configured": true, 00:19:12.628 "data_offset": 2048, 00:19:12.628 "data_size": 63488 00:19:12.628 }, 00:19:12.628 { 00:19:12.628 "name": "BaseBdev2", 00:19:12.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.628 "is_configured": false, 00:19:12.628 "data_offset": 0, 00:19:12.628 "data_size": 0 00:19:12.628 }, 00:19:12.628 { 00:19:12.628 "name": "BaseBdev3", 00:19:12.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.628 "is_configured": false, 00:19:12.628 "data_offset": 0, 00:19:12.628 "data_size": 0 00:19:12.628 }, 00:19:12.628 { 00:19:12.628 "name": "BaseBdev4", 00:19:12.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.628 "is_configured": false, 00:19:12.628 "data_offset": 0, 00:19:12.628 "data_size": 0 00:19:12.628 } 00:19:12.628 ] 00:19:12.628 }' 00:19:12.628 14:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.628 14:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.195 [2024-11-04 14:45:12.111128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:13.195 BaseBdev2 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.195 [ 00:19:13.195 { 00:19:13.195 "name": "BaseBdev2", 00:19:13.195 "aliases": [ 00:19:13.195 
"4d950852-b5bb-4e7c-b97c-96664747ef7d" 00:19:13.195 ], 00:19:13.195 "product_name": "Malloc disk", 00:19:13.195 "block_size": 512, 00:19:13.195 "num_blocks": 65536, 00:19:13.195 "uuid": "4d950852-b5bb-4e7c-b97c-96664747ef7d", 00:19:13.195 "assigned_rate_limits": { 00:19:13.195 "rw_ios_per_sec": 0, 00:19:13.195 "rw_mbytes_per_sec": 0, 00:19:13.195 "r_mbytes_per_sec": 0, 00:19:13.195 "w_mbytes_per_sec": 0 00:19:13.195 }, 00:19:13.195 "claimed": true, 00:19:13.195 "claim_type": "exclusive_write", 00:19:13.195 "zoned": false, 00:19:13.195 "supported_io_types": { 00:19:13.195 "read": true, 00:19:13.195 "write": true, 00:19:13.195 "unmap": true, 00:19:13.195 "flush": true, 00:19:13.195 "reset": true, 00:19:13.195 "nvme_admin": false, 00:19:13.195 "nvme_io": false, 00:19:13.195 "nvme_io_md": false, 00:19:13.195 "write_zeroes": true, 00:19:13.195 "zcopy": true, 00:19:13.195 "get_zone_info": false, 00:19:13.195 "zone_management": false, 00:19:13.195 "zone_append": false, 00:19:13.195 "compare": false, 00:19:13.195 "compare_and_write": false, 00:19:13.195 "abort": true, 00:19:13.195 "seek_hole": false, 00:19:13.195 "seek_data": false, 00:19:13.195 "copy": true, 00:19:13.195 "nvme_iov_md": false 00:19:13.195 }, 00:19:13.195 "memory_domains": [ 00:19:13.195 { 00:19:13.195 "dma_device_id": "system", 00:19:13.195 "dma_device_type": 1 00:19:13.195 }, 00:19:13.195 { 00:19:13.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.195 "dma_device_type": 2 00:19:13.195 } 00:19:13.195 ], 00:19:13.195 "driver_specific": {} 00:19:13.195 } 00:19:13.195 ] 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.195 "name": "Existed_Raid", 00:19:13.195 "uuid": 
"d341d782-368a-4f6f-ac69-b729bb361d89", 00:19:13.195 "strip_size_kb": 64, 00:19:13.195 "state": "configuring", 00:19:13.195 "raid_level": "raid5f", 00:19:13.195 "superblock": true, 00:19:13.195 "num_base_bdevs": 4, 00:19:13.195 "num_base_bdevs_discovered": 2, 00:19:13.195 "num_base_bdevs_operational": 4, 00:19:13.195 "base_bdevs_list": [ 00:19:13.195 { 00:19:13.195 "name": "BaseBdev1", 00:19:13.195 "uuid": "6266b6a3-64e3-4d49-a493-7812e7128605", 00:19:13.195 "is_configured": true, 00:19:13.195 "data_offset": 2048, 00:19:13.195 "data_size": 63488 00:19:13.195 }, 00:19:13.195 { 00:19:13.195 "name": "BaseBdev2", 00:19:13.195 "uuid": "4d950852-b5bb-4e7c-b97c-96664747ef7d", 00:19:13.195 "is_configured": true, 00:19:13.195 "data_offset": 2048, 00:19:13.195 "data_size": 63488 00:19:13.195 }, 00:19:13.195 { 00:19:13.195 "name": "BaseBdev3", 00:19:13.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.195 "is_configured": false, 00:19:13.195 "data_offset": 0, 00:19:13.195 "data_size": 0 00:19:13.195 }, 00:19:13.195 { 00:19:13.195 "name": "BaseBdev4", 00:19:13.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.195 "is_configured": false, 00:19:13.195 "data_offset": 0, 00:19:13.195 "data_size": 0 00:19:13.195 } 00:19:13.195 ] 00:19:13.195 }' 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.195 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.760 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:13.760 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.760 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.760 [2024-11-04 14:45:12.690078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:13.760 BaseBdev3 
00:19:13.760 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.760 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:19:13.760 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:19:13.760 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:13.760 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:13.760 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:13.760 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:13.760 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:13.760 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.760 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.760 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.760 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:13.760 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.760 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.760 [ 00:19:13.760 { 00:19:13.760 "name": "BaseBdev3", 00:19:13.760 "aliases": [ 00:19:13.760 "32456e08-4cb3-47cb-ae99-e01b7734d9a8" 00:19:13.760 ], 00:19:13.760 "product_name": "Malloc disk", 00:19:13.760 "block_size": 512, 00:19:13.760 "num_blocks": 65536, 00:19:13.760 "uuid": "32456e08-4cb3-47cb-ae99-e01b7734d9a8", 00:19:13.760 
"assigned_rate_limits": { 00:19:13.760 "rw_ios_per_sec": 0, 00:19:13.760 "rw_mbytes_per_sec": 0, 00:19:13.760 "r_mbytes_per_sec": 0, 00:19:13.760 "w_mbytes_per_sec": 0 00:19:13.760 }, 00:19:13.760 "claimed": true, 00:19:13.760 "claim_type": "exclusive_write", 00:19:13.760 "zoned": false, 00:19:13.760 "supported_io_types": { 00:19:13.760 "read": true, 00:19:13.760 "write": true, 00:19:13.760 "unmap": true, 00:19:13.760 "flush": true, 00:19:13.760 "reset": true, 00:19:13.760 "nvme_admin": false, 00:19:13.760 "nvme_io": false, 00:19:13.760 "nvme_io_md": false, 00:19:13.760 "write_zeroes": true, 00:19:13.760 "zcopy": true, 00:19:13.760 "get_zone_info": false, 00:19:13.760 "zone_management": false, 00:19:13.760 "zone_append": false, 00:19:13.760 "compare": false, 00:19:13.760 "compare_and_write": false, 00:19:13.760 "abort": true, 00:19:13.760 "seek_hole": false, 00:19:13.760 "seek_data": false, 00:19:13.760 "copy": true, 00:19:13.760 "nvme_iov_md": false 00:19:13.760 }, 00:19:13.760 "memory_domains": [ 00:19:13.760 { 00:19:13.760 "dma_device_id": "system", 00:19:13.761 "dma_device_type": 1 00:19:13.761 }, 00:19:13.761 { 00:19:13.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.761 "dma_device_type": 2 00:19:13.761 } 00:19:13.761 ], 00:19:13.761 "driver_specific": {} 00:19:13.761 } 00:19:13.761 ] 00:19:13.761 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.761 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:13.761 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:13.761 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:13.761 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:13.761 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:19:13.761 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:13.761 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:13.761 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:13.761 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:13.761 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.761 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.761 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.761 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.761 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.761 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.761 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.761 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.761 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.761 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.761 "name": "Existed_Raid", 00:19:13.761 "uuid": "d341d782-368a-4f6f-ac69-b729bb361d89", 00:19:13.761 "strip_size_kb": 64, 00:19:13.761 "state": "configuring", 00:19:13.761 "raid_level": "raid5f", 00:19:13.761 "superblock": true, 00:19:13.761 "num_base_bdevs": 4, 00:19:13.761 "num_base_bdevs_discovered": 3, 
00:19:13.761 "num_base_bdevs_operational": 4, 00:19:13.761 "base_bdevs_list": [ 00:19:13.761 { 00:19:13.761 "name": "BaseBdev1", 00:19:13.761 "uuid": "6266b6a3-64e3-4d49-a493-7812e7128605", 00:19:13.761 "is_configured": true, 00:19:13.761 "data_offset": 2048, 00:19:13.761 "data_size": 63488 00:19:13.761 }, 00:19:13.761 { 00:19:13.761 "name": "BaseBdev2", 00:19:13.761 "uuid": "4d950852-b5bb-4e7c-b97c-96664747ef7d", 00:19:13.761 "is_configured": true, 00:19:13.761 "data_offset": 2048, 00:19:13.761 "data_size": 63488 00:19:13.761 }, 00:19:13.761 { 00:19:13.761 "name": "BaseBdev3", 00:19:13.761 "uuid": "32456e08-4cb3-47cb-ae99-e01b7734d9a8", 00:19:13.761 "is_configured": true, 00:19:13.761 "data_offset": 2048, 00:19:13.761 "data_size": 63488 00:19:13.761 }, 00:19:13.761 { 00:19:13.761 "name": "BaseBdev4", 00:19:13.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.761 "is_configured": false, 00:19:13.761 "data_offset": 0, 00:19:13.761 "data_size": 0 00:19:13.761 } 00:19:13.761 ] 00:19:13.761 }' 00:19:13.761 14:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.761 14:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.325 14:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:19:14.325 14:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.325 14:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.325 [2024-11-04 14:45:13.304917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:14.325 [2024-11-04 14:45:13.305310] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:14.325 [2024-11-04 14:45:13.305337] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:14.325 [2024-11-04 
14:45:13.305676] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:14.325 BaseBdev4 00:19:14.325 14:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.325 14:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:19:14.325 14:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:19:14.325 14:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:14.325 14:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:14.325 14:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:14.325 14:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:14.325 14:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:14.325 14:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.325 14:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.325 [2024-11-04 14:45:13.312590] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:14.325 [2024-11-04 14:45:13.312624] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:14.325 [2024-11-04 14:45:13.312979] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:14.325 14:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.325 14:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:14.325 14:45:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.325 14:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.325 [ 00:19:14.325 { 00:19:14.325 "name": "BaseBdev4", 00:19:14.325 "aliases": [ 00:19:14.325 "d1bd7210-28fe-4319-aee0-2628615dab8b" 00:19:14.326 ], 00:19:14.326 "product_name": "Malloc disk", 00:19:14.326 "block_size": 512, 00:19:14.326 "num_blocks": 65536, 00:19:14.326 "uuid": "d1bd7210-28fe-4319-aee0-2628615dab8b", 00:19:14.326 "assigned_rate_limits": { 00:19:14.326 "rw_ios_per_sec": 0, 00:19:14.326 "rw_mbytes_per_sec": 0, 00:19:14.326 "r_mbytes_per_sec": 0, 00:19:14.326 "w_mbytes_per_sec": 0 00:19:14.326 }, 00:19:14.326 "claimed": true, 00:19:14.326 "claim_type": "exclusive_write", 00:19:14.326 "zoned": false, 00:19:14.326 "supported_io_types": { 00:19:14.326 "read": true, 00:19:14.326 "write": true, 00:19:14.326 "unmap": true, 00:19:14.326 "flush": true, 00:19:14.326 "reset": true, 00:19:14.326 "nvme_admin": false, 00:19:14.326 "nvme_io": false, 00:19:14.326 "nvme_io_md": false, 00:19:14.326 "write_zeroes": true, 00:19:14.326 "zcopy": true, 00:19:14.326 "get_zone_info": false, 00:19:14.326 "zone_management": false, 00:19:14.326 "zone_append": false, 00:19:14.326 "compare": false, 00:19:14.326 "compare_and_write": false, 00:19:14.326 "abort": true, 00:19:14.326 "seek_hole": false, 00:19:14.326 "seek_data": false, 00:19:14.326 "copy": true, 00:19:14.326 "nvme_iov_md": false 00:19:14.326 }, 00:19:14.326 "memory_domains": [ 00:19:14.326 { 00:19:14.326 "dma_device_id": "system", 00:19:14.326 "dma_device_type": 1 00:19:14.326 }, 00:19:14.326 { 00:19:14.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.326 "dma_device_type": 2 00:19:14.326 } 00:19:14.326 ], 00:19:14.326 "driver_specific": {} 00:19:14.326 } 00:19:14.326 ] 00:19:14.326 14:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.326 14:45:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:14.326 14:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:14.326 14:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:14.326 14:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:19:14.326 14:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:14.326 14:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:14.326 14:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:14.326 14:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:14.326 14:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:14.326 14:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.326 14:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.326 14:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:14.326 14:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.326 14:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.326 14:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.326 14:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.326 14:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:19:14.326 14:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.326 14:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.326 "name": "Existed_Raid", 00:19:14.326 "uuid": "d341d782-368a-4f6f-ac69-b729bb361d89", 00:19:14.326 "strip_size_kb": 64, 00:19:14.326 "state": "online", 00:19:14.326 "raid_level": "raid5f", 00:19:14.326 "superblock": true, 00:19:14.326 "num_base_bdevs": 4, 00:19:14.326 "num_base_bdevs_discovered": 4, 00:19:14.326 "num_base_bdevs_operational": 4, 00:19:14.326 "base_bdevs_list": [ 00:19:14.326 { 00:19:14.326 "name": "BaseBdev1", 00:19:14.326 "uuid": "6266b6a3-64e3-4d49-a493-7812e7128605", 00:19:14.326 "is_configured": true, 00:19:14.326 "data_offset": 2048, 00:19:14.326 "data_size": 63488 00:19:14.326 }, 00:19:14.326 { 00:19:14.326 "name": "BaseBdev2", 00:19:14.326 "uuid": "4d950852-b5bb-4e7c-b97c-96664747ef7d", 00:19:14.326 "is_configured": true, 00:19:14.326 "data_offset": 2048, 00:19:14.326 "data_size": 63488 00:19:14.326 }, 00:19:14.326 { 00:19:14.326 "name": "BaseBdev3", 00:19:14.326 "uuid": "32456e08-4cb3-47cb-ae99-e01b7734d9a8", 00:19:14.326 "is_configured": true, 00:19:14.326 "data_offset": 2048, 00:19:14.326 "data_size": 63488 00:19:14.326 }, 00:19:14.326 { 00:19:14.326 "name": "BaseBdev4", 00:19:14.326 "uuid": "d1bd7210-28fe-4319-aee0-2628615dab8b", 00:19:14.326 "is_configured": true, 00:19:14.326 "data_offset": 2048, 00:19:14.326 "data_size": 63488 00:19:14.326 } 00:19:14.326 ] 00:19:14.326 }' 00:19:14.326 14:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.326 14:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.890 14:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:14.890 14:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:19:14.890 14:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:14.890 14:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:14.890 14:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:14.890 14:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:14.890 14:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:14.890 14:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:14.890 14:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.890 14:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.890 [2024-11-04 14:45:13.904765] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:14.890 14:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.890 14:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:14.890 "name": "Existed_Raid", 00:19:14.890 "aliases": [ 00:19:14.890 "d341d782-368a-4f6f-ac69-b729bb361d89" 00:19:14.890 ], 00:19:14.890 "product_name": "Raid Volume", 00:19:14.890 "block_size": 512, 00:19:14.890 "num_blocks": 190464, 00:19:14.890 "uuid": "d341d782-368a-4f6f-ac69-b729bb361d89", 00:19:14.890 "assigned_rate_limits": { 00:19:14.890 "rw_ios_per_sec": 0, 00:19:14.890 "rw_mbytes_per_sec": 0, 00:19:14.890 "r_mbytes_per_sec": 0, 00:19:14.890 "w_mbytes_per_sec": 0 00:19:14.890 }, 00:19:14.890 "claimed": false, 00:19:14.890 "zoned": false, 00:19:14.891 "supported_io_types": { 00:19:14.891 "read": true, 00:19:14.891 "write": true, 00:19:14.891 "unmap": false, 00:19:14.891 "flush": false, 
00:19:14.891 "reset": true, 00:19:14.891 "nvme_admin": false, 00:19:14.891 "nvme_io": false, 00:19:14.891 "nvme_io_md": false, 00:19:14.891 "write_zeroes": true, 00:19:14.891 "zcopy": false, 00:19:14.891 "get_zone_info": false, 00:19:14.891 "zone_management": false, 00:19:14.891 "zone_append": false, 00:19:14.891 "compare": false, 00:19:14.891 "compare_and_write": false, 00:19:14.891 "abort": false, 00:19:14.891 "seek_hole": false, 00:19:14.891 "seek_data": false, 00:19:14.891 "copy": false, 00:19:14.891 "nvme_iov_md": false 00:19:14.891 }, 00:19:14.891 "driver_specific": { 00:19:14.891 "raid": { 00:19:14.891 "uuid": "d341d782-368a-4f6f-ac69-b729bb361d89", 00:19:14.891 "strip_size_kb": 64, 00:19:14.891 "state": "online", 00:19:14.891 "raid_level": "raid5f", 00:19:14.891 "superblock": true, 00:19:14.891 "num_base_bdevs": 4, 00:19:14.891 "num_base_bdevs_discovered": 4, 00:19:14.891 "num_base_bdevs_operational": 4, 00:19:14.891 "base_bdevs_list": [ 00:19:14.891 { 00:19:14.891 "name": "BaseBdev1", 00:19:14.891 "uuid": "6266b6a3-64e3-4d49-a493-7812e7128605", 00:19:14.891 "is_configured": true, 00:19:14.891 "data_offset": 2048, 00:19:14.891 "data_size": 63488 00:19:14.891 }, 00:19:14.891 { 00:19:14.891 "name": "BaseBdev2", 00:19:14.891 "uuid": "4d950852-b5bb-4e7c-b97c-96664747ef7d", 00:19:14.891 "is_configured": true, 00:19:14.891 "data_offset": 2048, 00:19:14.891 "data_size": 63488 00:19:14.891 }, 00:19:14.891 { 00:19:14.891 "name": "BaseBdev3", 00:19:14.891 "uuid": "32456e08-4cb3-47cb-ae99-e01b7734d9a8", 00:19:14.891 "is_configured": true, 00:19:14.891 "data_offset": 2048, 00:19:14.891 "data_size": 63488 00:19:14.891 }, 00:19:14.891 { 00:19:14.891 "name": "BaseBdev4", 00:19:14.891 "uuid": "d1bd7210-28fe-4319-aee0-2628615dab8b", 00:19:14.891 "is_configured": true, 00:19:14.891 "data_offset": 2048, 00:19:14.891 "data_size": 63488 00:19:14.891 } 00:19:14.891 ] 00:19:14.891 } 00:19:14.891 } 00:19:14.891 }' 00:19:14.891 14:45:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:14.891 14:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:14.891 BaseBdev2 00:19:14.891 BaseBdev3 00:19:14.891 BaseBdev4' 00:19:14.891 14:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.148 14:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.148 [2024-11-04 14:45:14.260672] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:15.405 14:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.405 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:15.405 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:19:15.405 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:15.405 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:19:15.405 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:15.405 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:15.405 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:15.405 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:19:15.405 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:15.405 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:15.405 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:15.405 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.405 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.405 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.405 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.405 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.405 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.405 14:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.405 14:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.405 14:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.405 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.405 "name": "Existed_Raid", 00:19:15.405 "uuid": "d341d782-368a-4f6f-ac69-b729bb361d89", 00:19:15.405 "strip_size_kb": 64, 00:19:15.405 "state": "online", 00:19:15.405 "raid_level": "raid5f", 00:19:15.405 "superblock": true, 00:19:15.405 "num_base_bdevs": 4, 00:19:15.405 "num_base_bdevs_discovered": 3, 00:19:15.406 "num_base_bdevs_operational": 3, 00:19:15.406 "base_bdevs_list": [ 00:19:15.406 { 00:19:15.406 "name": null, 00:19:15.406 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:15.406 "is_configured": false, 00:19:15.406 "data_offset": 0, 00:19:15.406 "data_size": 63488 00:19:15.406 }, 00:19:15.406 { 00:19:15.406 "name": "BaseBdev2", 00:19:15.406 "uuid": "4d950852-b5bb-4e7c-b97c-96664747ef7d", 00:19:15.406 "is_configured": true, 00:19:15.406 "data_offset": 2048, 00:19:15.406 "data_size": 63488 00:19:15.406 }, 00:19:15.406 { 00:19:15.406 "name": "BaseBdev3", 00:19:15.406 "uuid": "32456e08-4cb3-47cb-ae99-e01b7734d9a8", 00:19:15.406 "is_configured": true, 00:19:15.406 "data_offset": 2048, 00:19:15.406 "data_size": 63488 00:19:15.406 }, 00:19:15.406 { 00:19:15.406 "name": "BaseBdev4", 00:19:15.406 "uuid": "d1bd7210-28fe-4319-aee0-2628615dab8b", 00:19:15.406 "is_configured": true, 00:19:15.406 "data_offset": 2048, 00:19:15.406 "data_size": 63488 00:19:15.406 } 00:19:15.406 ] 00:19:15.406 }' 00:19:15.406 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.406 14:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.999 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:15.999 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:15.999 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:15.999 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.999 14:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.999 14:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.999 14:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.999 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:19:15.999 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:15.999 14:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:15.999 14:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.999 14:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.999 [2024-11-04 14:45:14.917822] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:15.999 [2024-11-04 14:45:14.918069] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:15.999 [2024-11-04 14:45:15.004160] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:15.999 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.999 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:15.999 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:15.999 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.999 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:15.999 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.999 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.999 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.999 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:15.999 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:15.999 
14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:19:16.000 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.000 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.000 [2024-11-04 14:45:15.080230] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:16.259 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.259 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:16.259 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:16.259 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.259 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.259 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.259 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:16.259 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.259 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:16.259 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:16.259 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:19:16.259 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.259 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.259 [2024-11-04 14:45:15.226619] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:16.259 [2024-11-04 14:45:15.226688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:16.259 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.259 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:16.259 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:16.259 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.259 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.259 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.259 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:16.259 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.259 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:16.259 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:16.259 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:19:16.259 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:16.259 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:16.259 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:16.259 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.259 14:45:15 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:16.519 BaseBdev2 00:19:16.519 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.519 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:19:16.519 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:19:16.519 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:16.519 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:16.519 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:16.519 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:16.519 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:16.519 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.519 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.519 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.519 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:16.519 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.519 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.519 [ 00:19:16.519 { 00:19:16.519 "name": "BaseBdev2", 00:19:16.519 "aliases": [ 00:19:16.519 "b7ff8df4-5088-4e9c-bdd7-d91678782992" 00:19:16.519 ], 00:19:16.519 "product_name": "Malloc disk", 00:19:16.519 "block_size": 512, 00:19:16.519 "num_blocks": 65536, 00:19:16.519 "uuid": 
"b7ff8df4-5088-4e9c-bdd7-d91678782992", 00:19:16.519 "assigned_rate_limits": { 00:19:16.519 "rw_ios_per_sec": 0, 00:19:16.520 "rw_mbytes_per_sec": 0, 00:19:16.520 "r_mbytes_per_sec": 0, 00:19:16.520 "w_mbytes_per_sec": 0 00:19:16.520 }, 00:19:16.520 "claimed": false, 00:19:16.520 "zoned": false, 00:19:16.520 "supported_io_types": { 00:19:16.520 "read": true, 00:19:16.520 "write": true, 00:19:16.520 "unmap": true, 00:19:16.520 "flush": true, 00:19:16.520 "reset": true, 00:19:16.520 "nvme_admin": false, 00:19:16.520 "nvme_io": false, 00:19:16.520 "nvme_io_md": false, 00:19:16.520 "write_zeroes": true, 00:19:16.520 "zcopy": true, 00:19:16.520 "get_zone_info": false, 00:19:16.520 "zone_management": false, 00:19:16.520 "zone_append": false, 00:19:16.520 "compare": false, 00:19:16.520 "compare_and_write": false, 00:19:16.520 "abort": true, 00:19:16.520 "seek_hole": false, 00:19:16.520 "seek_data": false, 00:19:16.520 "copy": true, 00:19:16.520 "nvme_iov_md": false 00:19:16.520 }, 00:19:16.520 "memory_domains": [ 00:19:16.520 { 00:19:16.520 "dma_device_id": "system", 00:19:16.520 "dma_device_type": 1 00:19:16.520 }, 00:19:16.520 { 00:19:16.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.520 "dma_device_type": 2 00:19:16.520 } 00:19:16.520 ], 00:19:16.520 "driver_specific": {} 00:19:16.520 } 00:19:16.520 ] 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.520 BaseBdev3 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.520 [ 00:19:16.520 { 00:19:16.520 "name": "BaseBdev3", 00:19:16.520 "aliases": [ 00:19:16.520 "afc15596-5455-4df1-8851-3b929e0bdd3e" 00:19:16.520 ], 00:19:16.520 
"product_name": "Malloc disk", 00:19:16.520 "block_size": 512, 00:19:16.520 "num_blocks": 65536, 00:19:16.520 "uuid": "afc15596-5455-4df1-8851-3b929e0bdd3e", 00:19:16.520 "assigned_rate_limits": { 00:19:16.520 "rw_ios_per_sec": 0, 00:19:16.520 "rw_mbytes_per_sec": 0, 00:19:16.520 "r_mbytes_per_sec": 0, 00:19:16.520 "w_mbytes_per_sec": 0 00:19:16.520 }, 00:19:16.520 "claimed": false, 00:19:16.520 "zoned": false, 00:19:16.520 "supported_io_types": { 00:19:16.520 "read": true, 00:19:16.520 "write": true, 00:19:16.520 "unmap": true, 00:19:16.520 "flush": true, 00:19:16.520 "reset": true, 00:19:16.520 "nvme_admin": false, 00:19:16.520 "nvme_io": false, 00:19:16.520 "nvme_io_md": false, 00:19:16.520 "write_zeroes": true, 00:19:16.520 "zcopy": true, 00:19:16.520 "get_zone_info": false, 00:19:16.520 "zone_management": false, 00:19:16.520 "zone_append": false, 00:19:16.520 "compare": false, 00:19:16.520 "compare_and_write": false, 00:19:16.520 "abort": true, 00:19:16.520 "seek_hole": false, 00:19:16.520 "seek_data": false, 00:19:16.520 "copy": true, 00:19:16.520 "nvme_iov_md": false 00:19:16.520 }, 00:19:16.520 "memory_domains": [ 00:19:16.520 { 00:19:16.520 "dma_device_id": "system", 00:19:16.520 "dma_device_type": 1 00:19:16.520 }, 00:19:16.520 { 00:19:16.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.520 "dma_device_type": 2 00:19:16.520 } 00:19:16.520 ], 00:19:16.520 "driver_specific": {} 00:19:16.520 } 00:19:16.520 ] 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.520 BaseBdev4 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.520 [ 00:19:16.520 { 00:19:16.520 "name": "BaseBdev4", 00:19:16.520 
"aliases": [ 00:19:16.520 "d10504f7-8762-4313-9a92-b503e162c1d2" 00:19:16.520 ], 00:19:16.520 "product_name": "Malloc disk", 00:19:16.520 "block_size": 512, 00:19:16.520 "num_blocks": 65536, 00:19:16.520 "uuid": "d10504f7-8762-4313-9a92-b503e162c1d2", 00:19:16.520 "assigned_rate_limits": { 00:19:16.520 "rw_ios_per_sec": 0, 00:19:16.520 "rw_mbytes_per_sec": 0, 00:19:16.520 "r_mbytes_per_sec": 0, 00:19:16.520 "w_mbytes_per_sec": 0 00:19:16.520 }, 00:19:16.520 "claimed": false, 00:19:16.520 "zoned": false, 00:19:16.520 "supported_io_types": { 00:19:16.520 "read": true, 00:19:16.520 "write": true, 00:19:16.520 "unmap": true, 00:19:16.520 "flush": true, 00:19:16.520 "reset": true, 00:19:16.520 "nvme_admin": false, 00:19:16.520 "nvme_io": false, 00:19:16.520 "nvme_io_md": false, 00:19:16.520 "write_zeroes": true, 00:19:16.520 "zcopy": true, 00:19:16.520 "get_zone_info": false, 00:19:16.520 "zone_management": false, 00:19:16.520 "zone_append": false, 00:19:16.520 "compare": false, 00:19:16.520 "compare_and_write": false, 00:19:16.520 "abort": true, 00:19:16.520 "seek_hole": false, 00:19:16.520 "seek_data": false, 00:19:16.520 "copy": true, 00:19:16.520 "nvme_iov_md": false 00:19:16.520 }, 00:19:16.520 "memory_domains": [ 00:19:16.520 { 00:19:16.520 "dma_device_id": "system", 00:19:16.520 "dma_device_type": 1 00:19:16.520 }, 00:19:16.520 { 00:19:16.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.520 "dma_device_type": 2 00:19:16.520 } 00:19:16.520 ], 00:19:16.520 "driver_specific": {} 00:19:16.520 } 00:19:16.520 ] 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:16.520 
14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.520 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.521 [2024-11-04 14:45:15.601258] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:16.521 [2024-11-04 14:45:15.601561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:16.521 [2024-11-04 14:45:15.601633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:16.521 [2024-11-04 14:45:15.604788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:16.521 [2024-11-04 14:45:15.605080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:16.521 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.521 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:16.521 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:16.521 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:16.521 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:16.521 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:16.521 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:16.521 14:45:15 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:16.521 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:16.521 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:16.521 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:16.521 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.521 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.521 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.521 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:16.521 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.780 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:16.780 "name": "Existed_Raid", 00:19:16.780 "uuid": "31276e03-1f04-4a17-9f1f-67722848939b", 00:19:16.780 "strip_size_kb": 64, 00:19:16.780 "state": "configuring", 00:19:16.780 "raid_level": "raid5f", 00:19:16.780 "superblock": true, 00:19:16.780 "num_base_bdevs": 4, 00:19:16.780 "num_base_bdevs_discovered": 3, 00:19:16.780 "num_base_bdevs_operational": 4, 00:19:16.780 "base_bdevs_list": [ 00:19:16.780 { 00:19:16.780 "name": "BaseBdev1", 00:19:16.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.780 "is_configured": false, 00:19:16.780 "data_offset": 0, 00:19:16.780 "data_size": 0 00:19:16.780 }, 00:19:16.780 { 00:19:16.780 "name": "BaseBdev2", 00:19:16.780 "uuid": "b7ff8df4-5088-4e9c-bdd7-d91678782992", 00:19:16.780 "is_configured": true, 00:19:16.780 "data_offset": 2048, 00:19:16.780 "data_size": 63488 00:19:16.780 }, 00:19:16.780 { 00:19:16.780 "name": "BaseBdev3", 
00:19:16.780 "uuid": "afc15596-5455-4df1-8851-3b929e0bdd3e", 00:19:16.780 "is_configured": true, 00:19:16.780 "data_offset": 2048, 00:19:16.780 "data_size": 63488 00:19:16.780 }, 00:19:16.780 { 00:19:16.780 "name": "BaseBdev4", 00:19:16.780 "uuid": "d10504f7-8762-4313-9a92-b503e162c1d2", 00:19:16.780 "is_configured": true, 00:19:16.780 "data_offset": 2048, 00:19:16.780 "data_size": 63488 00:19:16.780 } 00:19:16.780 ] 00:19:16.780 }' 00:19:16.780 14:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:16.780 14:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.040 14:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:17.040 14:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.040 14:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.040 [2024-11-04 14:45:16.121497] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:17.040 14:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.040 14:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:17.040 14:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:17.040 14:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:17.040 14:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:17.040 14:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:17.040 14:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:17.040 
14:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.040 14:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.040 14:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.040 14:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.040 14:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:17.040 14:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.040 14:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.040 14:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.040 14:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.299 14:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.299 "name": "Existed_Raid", 00:19:17.299 "uuid": "31276e03-1f04-4a17-9f1f-67722848939b", 00:19:17.299 "strip_size_kb": 64, 00:19:17.299 "state": "configuring", 00:19:17.299 "raid_level": "raid5f", 00:19:17.299 "superblock": true, 00:19:17.299 "num_base_bdevs": 4, 00:19:17.299 "num_base_bdevs_discovered": 2, 00:19:17.299 "num_base_bdevs_operational": 4, 00:19:17.299 "base_bdevs_list": [ 00:19:17.299 { 00:19:17.299 "name": "BaseBdev1", 00:19:17.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.299 "is_configured": false, 00:19:17.299 "data_offset": 0, 00:19:17.299 "data_size": 0 00:19:17.299 }, 00:19:17.299 { 00:19:17.299 "name": null, 00:19:17.299 "uuid": "b7ff8df4-5088-4e9c-bdd7-d91678782992", 00:19:17.299 "is_configured": false, 00:19:17.299 "data_offset": 0, 00:19:17.299 "data_size": 63488 00:19:17.299 }, 00:19:17.299 { 
00:19:17.299 "name": "BaseBdev3", 00:19:17.299 "uuid": "afc15596-5455-4df1-8851-3b929e0bdd3e", 00:19:17.299 "is_configured": true, 00:19:17.299 "data_offset": 2048, 00:19:17.299 "data_size": 63488 00:19:17.299 }, 00:19:17.299 { 00:19:17.299 "name": "BaseBdev4", 00:19:17.299 "uuid": "d10504f7-8762-4313-9a92-b503e162c1d2", 00:19:17.299 "is_configured": true, 00:19:17.299 "data_offset": 2048, 00:19:17.299 "data_size": 63488 00:19:17.299 } 00:19:17.299 ] 00:19:17.299 }' 00:19:17.299 14:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.299 14:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.556 14:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.556 14:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.556 14:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:17.556 14:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.556 14:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.815 14:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:19:17.815 14:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:17.815 14:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.815 14:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.815 [2024-11-04 14:45:16.723371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:17.815 BaseBdev1 00:19:17.815 14:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:19:17.815 14:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:19:17.815 14:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:19:17.815 14:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:17.815 14:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:17.815 14:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:17.815 14:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:17.815 14:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:17.815 14:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.815 14:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.815 14:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.815 14:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:17.815 14:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.815 14:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.815 [ 00:19:17.815 { 00:19:17.815 "name": "BaseBdev1", 00:19:17.815 "aliases": [ 00:19:17.815 "2366b53d-0740-40f5-b3b3-70a2d1ef9668" 00:19:17.815 ], 00:19:17.815 "product_name": "Malloc disk", 00:19:17.815 "block_size": 512, 00:19:17.815 "num_blocks": 65536, 00:19:17.815 "uuid": "2366b53d-0740-40f5-b3b3-70a2d1ef9668", 00:19:17.815 "assigned_rate_limits": { 00:19:17.815 "rw_ios_per_sec": 0, 00:19:17.815 "rw_mbytes_per_sec": 0, 00:19:17.815 
"r_mbytes_per_sec": 0, 00:19:17.815 "w_mbytes_per_sec": 0 00:19:17.815 }, 00:19:17.815 "claimed": true, 00:19:17.815 "claim_type": "exclusive_write", 00:19:17.815 "zoned": false, 00:19:17.815 "supported_io_types": { 00:19:17.815 "read": true, 00:19:17.815 "write": true, 00:19:17.815 "unmap": true, 00:19:17.815 "flush": true, 00:19:17.815 "reset": true, 00:19:17.815 "nvme_admin": false, 00:19:17.815 "nvme_io": false, 00:19:17.815 "nvme_io_md": false, 00:19:17.815 "write_zeroes": true, 00:19:17.815 "zcopy": true, 00:19:17.816 "get_zone_info": false, 00:19:17.816 "zone_management": false, 00:19:17.816 "zone_append": false, 00:19:17.816 "compare": false, 00:19:17.816 "compare_and_write": false, 00:19:17.816 "abort": true, 00:19:17.816 "seek_hole": false, 00:19:17.816 "seek_data": false, 00:19:17.816 "copy": true, 00:19:17.816 "nvme_iov_md": false 00:19:17.816 }, 00:19:17.816 "memory_domains": [ 00:19:17.816 { 00:19:17.816 "dma_device_id": "system", 00:19:17.816 "dma_device_type": 1 00:19:17.816 }, 00:19:17.816 { 00:19:17.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.816 "dma_device_type": 2 00:19:17.816 } 00:19:17.816 ], 00:19:17.816 "driver_specific": {} 00:19:17.816 } 00:19:17.816 ] 00:19:17.816 14:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.816 14:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:17.816 14:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:17.816 14:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:17.816 14:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:17.816 14:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:17.816 14:45:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:17.816 14:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:17.816 14:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.816 14:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.816 14:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.816 14:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.816 14:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.816 14:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:17.816 14:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.816 14:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.816 14:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.816 14:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.816 "name": "Existed_Raid", 00:19:17.816 "uuid": "31276e03-1f04-4a17-9f1f-67722848939b", 00:19:17.816 "strip_size_kb": 64, 00:19:17.816 "state": "configuring", 00:19:17.816 "raid_level": "raid5f", 00:19:17.816 "superblock": true, 00:19:17.816 "num_base_bdevs": 4, 00:19:17.816 "num_base_bdevs_discovered": 3, 00:19:17.816 "num_base_bdevs_operational": 4, 00:19:17.816 "base_bdevs_list": [ 00:19:17.816 { 00:19:17.816 "name": "BaseBdev1", 00:19:17.816 "uuid": "2366b53d-0740-40f5-b3b3-70a2d1ef9668", 00:19:17.816 "is_configured": true, 00:19:17.816 "data_offset": 2048, 00:19:17.816 "data_size": 63488 00:19:17.816 
}, 00:19:17.816 { 00:19:17.816 "name": null, 00:19:17.816 "uuid": "b7ff8df4-5088-4e9c-bdd7-d91678782992", 00:19:17.816 "is_configured": false, 00:19:17.816 "data_offset": 0, 00:19:17.816 "data_size": 63488 00:19:17.816 }, 00:19:17.816 { 00:19:17.816 "name": "BaseBdev3", 00:19:17.816 "uuid": "afc15596-5455-4df1-8851-3b929e0bdd3e", 00:19:17.816 "is_configured": true, 00:19:17.816 "data_offset": 2048, 00:19:17.816 "data_size": 63488 00:19:17.816 }, 00:19:17.816 { 00:19:17.816 "name": "BaseBdev4", 00:19:17.816 "uuid": "d10504f7-8762-4313-9a92-b503e162c1d2", 00:19:17.816 "is_configured": true, 00:19:17.816 "data_offset": 2048, 00:19:17.816 "data_size": 63488 00:19:17.816 } 00:19:17.816 ] 00:19:17.816 }' 00:19:17.816 14:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.816 14:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.387 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.387 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:18.387 14:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.387 14:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.387 14:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.387 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:18.387 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:18.387 14:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.387 14:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.387 
[2024-11-04 14:45:17.287625] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:18.387 14:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.387 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:18.387 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:18.387 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:18.387 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:18.387 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:18.387 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:18.387 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.387 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.388 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.388 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.388 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.388 14:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.388 14:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.388 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:18.388 14:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:19:18.388 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.388 "name": "Existed_Raid", 00:19:18.388 "uuid": "31276e03-1f04-4a17-9f1f-67722848939b", 00:19:18.388 "strip_size_kb": 64, 00:19:18.388 "state": "configuring", 00:19:18.388 "raid_level": "raid5f", 00:19:18.388 "superblock": true, 00:19:18.388 "num_base_bdevs": 4, 00:19:18.388 "num_base_bdevs_discovered": 2, 00:19:18.388 "num_base_bdevs_operational": 4, 00:19:18.388 "base_bdevs_list": [ 00:19:18.388 { 00:19:18.388 "name": "BaseBdev1", 00:19:18.388 "uuid": "2366b53d-0740-40f5-b3b3-70a2d1ef9668", 00:19:18.388 "is_configured": true, 00:19:18.388 "data_offset": 2048, 00:19:18.388 "data_size": 63488 00:19:18.388 }, 00:19:18.388 { 00:19:18.388 "name": null, 00:19:18.388 "uuid": "b7ff8df4-5088-4e9c-bdd7-d91678782992", 00:19:18.388 "is_configured": false, 00:19:18.388 "data_offset": 0, 00:19:18.388 "data_size": 63488 00:19:18.388 }, 00:19:18.388 { 00:19:18.388 "name": null, 00:19:18.388 "uuid": "afc15596-5455-4df1-8851-3b929e0bdd3e", 00:19:18.388 "is_configured": false, 00:19:18.388 "data_offset": 0, 00:19:18.388 "data_size": 63488 00:19:18.388 }, 00:19:18.388 { 00:19:18.388 "name": "BaseBdev4", 00:19:18.388 "uuid": "d10504f7-8762-4313-9a92-b503e162c1d2", 00:19:18.388 "is_configured": true, 00:19:18.388 "data_offset": 2048, 00:19:18.388 "data_size": 63488 00:19:18.388 } 00:19:18.388 ] 00:19:18.388 }' 00:19:18.388 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.388 14:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.961 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.961 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:18.961 14:45:17 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.961 14:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.961 14:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.961 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:18.961 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:18.961 14:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.961 14:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.961 [2024-11-04 14:45:17.843789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:18.961 14:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.961 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:18.961 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:18.961 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:18.961 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:18.961 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:18.961 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:18.961 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.961 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.961 14:45:17 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.961 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.961 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.961 14:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.961 14:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:18.961 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:18.961 14:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.961 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.961 "name": "Existed_Raid", 00:19:18.961 "uuid": "31276e03-1f04-4a17-9f1f-67722848939b", 00:19:18.961 "strip_size_kb": 64, 00:19:18.961 "state": "configuring", 00:19:18.961 "raid_level": "raid5f", 00:19:18.961 "superblock": true, 00:19:18.961 "num_base_bdevs": 4, 00:19:18.961 "num_base_bdevs_discovered": 3, 00:19:18.961 "num_base_bdevs_operational": 4, 00:19:18.961 "base_bdevs_list": [ 00:19:18.961 { 00:19:18.961 "name": "BaseBdev1", 00:19:18.961 "uuid": "2366b53d-0740-40f5-b3b3-70a2d1ef9668", 00:19:18.961 "is_configured": true, 00:19:18.961 "data_offset": 2048, 00:19:18.961 "data_size": 63488 00:19:18.961 }, 00:19:18.961 { 00:19:18.961 "name": null, 00:19:18.961 "uuid": "b7ff8df4-5088-4e9c-bdd7-d91678782992", 00:19:18.961 "is_configured": false, 00:19:18.961 "data_offset": 0, 00:19:18.961 "data_size": 63488 00:19:18.961 }, 00:19:18.961 { 00:19:18.961 "name": "BaseBdev3", 00:19:18.961 "uuid": "afc15596-5455-4df1-8851-3b929e0bdd3e", 00:19:18.961 "is_configured": true, 00:19:18.961 "data_offset": 2048, 00:19:18.961 "data_size": 63488 00:19:18.961 }, 00:19:18.961 { 
00:19:18.961 "name": "BaseBdev4", 00:19:18.961 "uuid": "d10504f7-8762-4313-9a92-b503e162c1d2", 00:19:18.961 "is_configured": true, 00:19:18.961 "data_offset": 2048, 00:19:18.961 "data_size": 63488 00:19:18.961 } 00:19:18.961 ] 00:19:18.961 }' 00:19:18.961 14:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.961 14:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.525 14:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.525 14:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:19.525 14:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.525 14:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.525 14:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.525 14:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:19.525 14:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:19.525 14:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.525 14:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.525 [2024-11-04 14:45:18.419965] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:19.525 14:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.525 14:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:19.525 14:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:19:19.525 14:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:19.525 14:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:19.525 14:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:19.525 14:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:19.525 14:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.525 14:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.525 14:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.525 14:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.525 14:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.526 14:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:19.526 14:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.526 14:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.526 14:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.526 14:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.526 "name": "Existed_Raid", 00:19:19.526 "uuid": "31276e03-1f04-4a17-9f1f-67722848939b", 00:19:19.526 "strip_size_kb": 64, 00:19:19.526 "state": "configuring", 00:19:19.526 "raid_level": "raid5f", 00:19:19.526 "superblock": true, 00:19:19.526 "num_base_bdevs": 4, 00:19:19.526 "num_base_bdevs_discovered": 2, 00:19:19.526 
"num_base_bdevs_operational": 4, 00:19:19.526 "base_bdevs_list": [ 00:19:19.526 { 00:19:19.526 "name": null, 00:19:19.526 "uuid": "2366b53d-0740-40f5-b3b3-70a2d1ef9668", 00:19:19.526 "is_configured": false, 00:19:19.526 "data_offset": 0, 00:19:19.526 "data_size": 63488 00:19:19.526 }, 00:19:19.526 { 00:19:19.526 "name": null, 00:19:19.526 "uuid": "b7ff8df4-5088-4e9c-bdd7-d91678782992", 00:19:19.526 "is_configured": false, 00:19:19.526 "data_offset": 0, 00:19:19.526 "data_size": 63488 00:19:19.526 }, 00:19:19.526 { 00:19:19.526 "name": "BaseBdev3", 00:19:19.526 "uuid": "afc15596-5455-4df1-8851-3b929e0bdd3e", 00:19:19.526 "is_configured": true, 00:19:19.526 "data_offset": 2048, 00:19:19.526 "data_size": 63488 00:19:19.526 }, 00:19:19.526 { 00:19:19.526 "name": "BaseBdev4", 00:19:19.526 "uuid": "d10504f7-8762-4313-9a92-b503e162c1d2", 00:19:19.526 "is_configured": true, 00:19:19.526 "data_offset": 2048, 00:19:19.526 "data_size": 63488 00:19:19.526 } 00:19:19.526 ] 00:19:19.526 }' 00:19:19.526 14:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.526 14:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.092 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.092 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.092 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.092 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:20.092 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.092 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:20.092 14:45:19 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:20.092 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.092 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.092 [2024-11-04 14:45:19.106406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:20.092 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.092 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:19:20.092 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:20.092 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:20.092 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:20.092 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:20.092 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:20.092 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.092 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.093 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.093 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.093 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.093 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:20.093 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.093 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:20.093 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.093 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.093 "name": "Existed_Raid", 00:19:20.093 "uuid": "31276e03-1f04-4a17-9f1f-67722848939b", 00:19:20.093 "strip_size_kb": 64, 00:19:20.093 "state": "configuring", 00:19:20.093 "raid_level": "raid5f", 00:19:20.093 "superblock": true, 00:19:20.093 "num_base_bdevs": 4, 00:19:20.093 "num_base_bdevs_discovered": 3, 00:19:20.093 "num_base_bdevs_operational": 4, 00:19:20.093 "base_bdevs_list": [ 00:19:20.093 { 00:19:20.093 "name": null, 00:19:20.093 "uuid": "2366b53d-0740-40f5-b3b3-70a2d1ef9668", 00:19:20.093 "is_configured": false, 00:19:20.093 "data_offset": 0, 00:19:20.093 "data_size": 63488 00:19:20.093 }, 00:19:20.093 { 00:19:20.093 "name": "BaseBdev2", 00:19:20.093 "uuid": "b7ff8df4-5088-4e9c-bdd7-d91678782992", 00:19:20.093 "is_configured": true, 00:19:20.093 "data_offset": 2048, 00:19:20.093 "data_size": 63488 00:19:20.093 }, 00:19:20.093 { 00:19:20.093 "name": "BaseBdev3", 00:19:20.093 "uuid": "afc15596-5455-4df1-8851-3b929e0bdd3e", 00:19:20.093 "is_configured": true, 00:19:20.093 "data_offset": 2048, 00:19:20.093 "data_size": 63488 00:19:20.093 }, 00:19:20.093 { 00:19:20.093 "name": "BaseBdev4", 00:19:20.093 "uuid": "d10504f7-8762-4313-9a92-b503e162c1d2", 00:19:20.093 "is_configured": true, 00:19:20.093 "data_offset": 2048, 00:19:20.093 "data_size": 63488 00:19:20.093 } 00:19:20.093 ] 00:19:20.093 }' 00:19:20.093 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.093 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:19:20.659 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.659 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.659 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.659 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:20.659 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.659 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:20.659 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.659 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.659 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:20.659 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.659 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.659 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2366b53d-0740-40f5-b3b3-70a2d1ef9668 00:19:20.659 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.659 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.659 [2024-11-04 14:45:19.748776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:20.659 [2024-11-04 14:45:19.749136] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:20.659 [2024-11-04 
14:45:19.749155] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:20.659 [2024-11-04 14:45:19.749477] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:19:20.659 NewBaseBdev 00:19:20.659 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.659 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:20.659 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:19:20.659 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:20.659 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:19:20.659 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:20.659 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:20.659 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:19:20.659 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.659 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.659 [2024-11-04 14:45:19.756159] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:20.659 [2024-11-04 14:45:19.756192] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:20.659 [2024-11-04 14:45:19.756520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.659 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.659 14:45:19 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:20.659 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.659 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.659 [ 00:19:20.659 { 00:19:20.659 "name": "NewBaseBdev", 00:19:20.659 "aliases": [ 00:19:20.659 "2366b53d-0740-40f5-b3b3-70a2d1ef9668" 00:19:20.659 ], 00:19:20.659 "product_name": "Malloc disk", 00:19:20.659 "block_size": 512, 00:19:20.659 "num_blocks": 65536, 00:19:20.659 "uuid": "2366b53d-0740-40f5-b3b3-70a2d1ef9668", 00:19:20.659 "assigned_rate_limits": { 00:19:20.659 "rw_ios_per_sec": 0, 00:19:20.659 "rw_mbytes_per_sec": 0, 00:19:20.659 "r_mbytes_per_sec": 0, 00:19:20.659 "w_mbytes_per_sec": 0 00:19:20.659 }, 00:19:20.659 "claimed": true, 00:19:20.659 "claim_type": "exclusive_write", 00:19:20.659 "zoned": false, 00:19:20.659 "supported_io_types": { 00:19:20.659 "read": true, 00:19:20.659 "write": true, 00:19:20.917 "unmap": true, 00:19:20.917 "flush": true, 00:19:20.917 "reset": true, 00:19:20.917 "nvme_admin": false, 00:19:20.917 "nvme_io": false, 00:19:20.917 "nvme_io_md": false, 00:19:20.917 "write_zeroes": true, 00:19:20.917 "zcopy": true, 00:19:20.917 "get_zone_info": false, 00:19:20.917 "zone_management": false, 00:19:20.917 "zone_append": false, 00:19:20.917 "compare": false, 00:19:20.917 "compare_and_write": false, 00:19:20.917 "abort": true, 00:19:20.917 "seek_hole": false, 00:19:20.917 "seek_data": false, 00:19:20.917 "copy": true, 00:19:20.917 "nvme_iov_md": false 00:19:20.917 }, 00:19:20.917 "memory_domains": [ 00:19:20.917 { 00:19:20.917 "dma_device_id": "system", 00:19:20.917 "dma_device_type": 1 00:19:20.917 }, 00:19:20.917 { 00:19:20.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:20.917 "dma_device_type": 2 00:19:20.917 } 00:19:20.917 ], 00:19:20.917 "driver_specific": {} 00:19:20.917 } 00:19:20.917 ] 00:19:20.917 14:45:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.917 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:19:20.917 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:19:20.917 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:20.917 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.917 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:20.917 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:20.917 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:20.917 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.917 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.917 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.917 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.917 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.918 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.918 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:20.918 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:20.918 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:19:20.918 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.918 "name": "Existed_Raid", 00:19:20.918 "uuid": "31276e03-1f04-4a17-9f1f-67722848939b", 00:19:20.918 "strip_size_kb": 64, 00:19:20.918 "state": "online", 00:19:20.918 "raid_level": "raid5f", 00:19:20.918 "superblock": true, 00:19:20.918 "num_base_bdevs": 4, 00:19:20.918 "num_base_bdevs_discovered": 4, 00:19:20.918 "num_base_bdevs_operational": 4, 00:19:20.918 "base_bdevs_list": [ 00:19:20.918 { 00:19:20.918 "name": "NewBaseBdev", 00:19:20.918 "uuid": "2366b53d-0740-40f5-b3b3-70a2d1ef9668", 00:19:20.918 "is_configured": true, 00:19:20.918 "data_offset": 2048, 00:19:20.918 "data_size": 63488 00:19:20.918 }, 00:19:20.918 { 00:19:20.918 "name": "BaseBdev2", 00:19:20.918 "uuid": "b7ff8df4-5088-4e9c-bdd7-d91678782992", 00:19:20.918 "is_configured": true, 00:19:20.918 "data_offset": 2048, 00:19:20.918 "data_size": 63488 00:19:20.918 }, 00:19:20.918 { 00:19:20.918 "name": "BaseBdev3", 00:19:20.918 "uuid": "afc15596-5455-4df1-8851-3b929e0bdd3e", 00:19:20.918 "is_configured": true, 00:19:20.918 "data_offset": 2048, 00:19:20.918 "data_size": 63488 00:19:20.918 }, 00:19:20.918 { 00:19:20.918 "name": "BaseBdev4", 00:19:20.918 "uuid": "d10504f7-8762-4313-9a92-b503e162c1d2", 00:19:20.918 "is_configured": true, 00:19:20.918 "data_offset": 2048, 00:19:20.918 "data_size": 63488 00:19:20.918 } 00:19:20.918 ] 00:19:20.918 }' 00:19:20.918 14:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.918 14:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.176 14:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:21.176 14:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:21.176 14:45:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:21.176 14:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:21.176 14:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:21.176 14:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:21.176 14:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:21.176 14:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.176 14:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.176 14:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:21.434 [2024-11-04 14:45:20.300358] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:21.434 14:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.434 14:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:21.434 "name": "Existed_Raid", 00:19:21.434 "aliases": [ 00:19:21.434 "31276e03-1f04-4a17-9f1f-67722848939b" 00:19:21.434 ], 00:19:21.434 "product_name": "Raid Volume", 00:19:21.434 "block_size": 512, 00:19:21.434 "num_blocks": 190464, 00:19:21.434 "uuid": "31276e03-1f04-4a17-9f1f-67722848939b", 00:19:21.434 "assigned_rate_limits": { 00:19:21.434 "rw_ios_per_sec": 0, 00:19:21.434 "rw_mbytes_per_sec": 0, 00:19:21.434 "r_mbytes_per_sec": 0, 00:19:21.434 "w_mbytes_per_sec": 0 00:19:21.434 }, 00:19:21.434 "claimed": false, 00:19:21.434 "zoned": false, 00:19:21.434 "supported_io_types": { 00:19:21.434 "read": true, 00:19:21.434 "write": true, 00:19:21.434 "unmap": false, 00:19:21.434 "flush": false, 00:19:21.434 "reset": true, 00:19:21.434 "nvme_admin": false, 00:19:21.434 "nvme_io": false, 
00:19:21.434 "nvme_io_md": false, 00:19:21.434 "write_zeroes": true, 00:19:21.434 "zcopy": false, 00:19:21.434 "get_zone_info": false, 00:19:21.434 "zone_management": false, 00:19:21.434 "zone_append": false, 00:19:21.434 "compare": false, 00:19:21.434 "compare_and_write": false, 00:19:21.434 "abort": false, 00:19:21.434 "seek_hole": false, 00:19:21.434 "seek_data": false, 00:19:21.434 "copy": false, 00:19:21.434 "nvme_iov_md": false 00:19:21.434 }, 00:19:21.434 "driver_specific": { 00:19:21.434 "raid": { 00:19:21.434 "uuid": "31276e03-1f04-4a17-9f1f-67722848939b", 00:19:21.434 "strip_size_kb": 64, 00:19:21.434 "state": "online", 00:19:21.434 "raid_level": "raid5f", 00:19:21.434 "superblock": true, 00:19:21.434 "num_base_bdevs": 4, 00:19:21.434 "num_base_bdevs_discovered": 4, 00:19:21.434 "num_base_bdevs_operational": 4, 00:19:21.434 "base_bdevs_list": [ 00:19:21.434 { 00:19:21.434 "name": "NewBaseBdev", 00:19:21.434 "uuid": "2366b53d-0740-40f5-b3b3-70a2d1ef9668", 00:19:21.434 "is_configured": true, 00:19:21.434 "data_offset": 2048, 00:19:21.434 "data_size": 63488 00:19:21.434 }, 00:19:21.434 { 00:19:21.434 "name": "BaseBdev2", 00:19:21.434 "uuid": "b7ff8df4-5088-4e9c-bdd7-d91678782992", 00:19:21.434 "is_configured": true, 00:19:21.434 "data_offset": 2048, 00:19:21.434 "data_size": 63488 00:19:21.434 }, 00:19:21.434 { 00:19:21.434 "name": "BaseBdev3", 00:19:21.434 "uuid": "afc15596-5455-4df1-8851-3b929e0bdd3e", 00:19:21.434 "is_configured": true, 00:19:21.434 "data_offset": 2048, 00:19:21.434 "data_size": 63488 00:19:21.434 }, 00:19:21.434 { 00:19:21.434 "name": "BaseBdev4", 00:19:21.434 "uuid": "d10504f7-8762-4313-9a92-b503e162c1d2", 00:19:21.434 "is_configured": true, 00:19:21.434 "data_offset": 2048, 00:19:21.434 "data_size": 63488 00:19:21.434 } 00:19:21.434 ] 00:19:21.434 } 00:19:21.434 } 00:19:21.434 }' 00:19:21.434 14:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:19:21.434 14:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:21.434 BaseBdev2 00:19:21.434 BaseBdev3 00:19:21.434 BaseBdev4' 00:19:21.434 14:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:21.434 14:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:21.434 14:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:21.434 14:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:21.434 14:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:21.434 14:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.435 14:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.435 14:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.435 14:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:21.435 14:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:21.435 14:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:21.435 14:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:21.435 14:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:21.435 14:45:20 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.435 14:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.435 14:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.693 14:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:21.693 14:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:21.693 14:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:21.693 14:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:21.693 14:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:21.693 14:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.693 14:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.693 14:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.693 14:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:21.693 14:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:21.693 14:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:21.693 14:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:19:21.693 14:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.693 14:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.693 14:45:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:21.693 14:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.693 14:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:21.693 14:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:21.693 14:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:21.693 14:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.693 14:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.693 [2024-11-04 14:45:20.676146] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:21.693 [2024-11-04 14:45:20.676185] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:21.693 [2024-11-04 14:45:20.676275] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:21.693 [2024-11-04 14:45:20.676649] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:21.693 [2024-11-04 14:45:20.676666] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:21.693 14:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.693 14:45:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83842 00:19:21.693 14:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 83842 ']' 00:19:21.693 14:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 83842 00:19:21.693 14:45:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:19:21.693 14:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:21.693 14:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83842 00:19:21.693 killing process with pid 83842 00:19:21.693 14:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:21.693 14:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:21.693 14:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83842' 00:19:21.693 14:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 83842 00:19:21.693 [2024-11-04 14:45:20.714754] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:21.693 14:45:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 83842 00:19:21.952 [2024-11-04 14:45:21.067913] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:23.327 14:45:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:19:23.327 00:19:23.327 real 0m12.816s 00:19:23.327 user 0m21.248s 00:19:23.327 sys 0m1.767s 00:19:23.327 ************************************ 00:19:23.327 END TEST raid5f_state_function_test_sb 00:19:23.327 ************************************ 00:19:23.327 14:45:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:23.327 14:45:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.327 14:45:22 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:19:23.327 14:45:22 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:19:23.327 
14:45:22 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:23.327 14:45:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:23.327 ************************************ 00:19:23.327 START TEST raid5f_superblock_test 00:19:23.327 ************************************ 00:19:23.327 14:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 4 00:19:23.327 14:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:19:23.327 14:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:19:23.327 14:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:23.327 14:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:23.327 14:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:23.327 14:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:23.327 14:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:23.327 14:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:23.327 14:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:23.327 14:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:23.327 14:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:23.327 14:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:23.327 14:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:23.327 14:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:19:23.327 14:45:22 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:19:23.327 14:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:19:23.327 14:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84516 00:19:23.327 14:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:23.327 14:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84516 00:19:23.327 14:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 84516 ']' 00:19:23.327 14:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.327 14:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:23.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.327 14:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.327 14:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:23.327 14:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.327 [2024-11-04 14:45:22.254370] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:19:23.327 [2024-11-04 14:45:22.254545] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84516 ] 00:19:23.327 [2024-11-04 14:45:22.442299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.585 [2024-11-04 14:45:22.596131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.843 [2024-11-04 14:45:22.801343] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:23.843 [2024-11-04 14:45:22.801416] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:24.409 14:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:24.409 14:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:19:24.409 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:24.409 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:24.409 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:24.409 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:24.409 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:24.409 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:24.409 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:24.409 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:24.409 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:19:24.409 14:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.409 14:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.409 malloc1 00:19:24.409 14:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.409 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:24.409 14:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.409 14:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.409 [2024-11-04 14:45:23.299147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:24.409 [2024-11-04 14:45:23.299239] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:24.409 [2024-11-04 14:45:23.299273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:24.409 [2024-11-04 14:45:23.299289] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:24.409 [2024-11-04 14:45:23.302079] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:24.410 [2024-11-04 14:45:23.302124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:24.410 pt1 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.410 malloc2 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.410 [2024-11-04 14:45:23.355334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:24.410 [2024-11-04 14:45:23.355538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:24.410 [2024-11-04 14:45:23.355581] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:24.410 [2024-11-04 14:45:23.355597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:24.410 [2024-11-04 14:45:23.358363] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:24.410 [2024-11-04 14:45:23.358409] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:24.410 pt2 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.410 malloc3 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.410 [2024-11-04 14:45:23.424006] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:24.410 [2024-11-04 14:45:23.424070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:24.410 [2024-11-04 14:45:23.424103] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:24.410 [2024-11-04 14:45:23.424119] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:24.410 [2024-11-04 14:45:23.426852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:24.410 [2024-11-04 14:45:23.427047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:24.410 pt3 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.410 14:45:23 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.410 malloc4 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.410 [2024-11-04 14:45:23.480684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:24.410 [2024-11-04 14:45:23.480758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:24.410 [2024-11-04 14:45:23.480791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:24.410 [2024-11-04 14:45:23.480806] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:24.410 [2024-11-04 14:45:23.483549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:24.410 [2024-11-04 14:45:23.483594] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:24.410 pt4 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:24.410 [2024-11-04 14:45:23.492738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:24.410 [2024-11-04 14:45:23.495141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:24.410 [2024-11-04 14:45:23.495233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:24.410 [2024-11-04 14:45:23.495330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:24.410 [2024-11-04 14:45:23.495601] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:24.410 [2024-11-04 14:45:23.495626] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:24.410 [2024-11-04 14:45:23.495988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:24.410 [2024-11-04 14:45:23.502679] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:24.410 [2024-11-04 14:45:23.502838] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:24.410 [2024-11-04 14:45:23.503112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:24.410 
14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.410 14:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.668 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.668 "name": "raid_bdev1", 00:19:24.668 "uuid": "133d30a9-5379-43bc-a22b-90499a5b178e", 00:19:24.668 "strip_size_kb": 64, 00:19:24.668 "state": "online", 00:19:24.668 "raid_level": "raid5f", 00:19:24.668 "superblock": true, 00:19:24.668 "num_base_bdevs": 4, 00:19:24.668 "num_base_bdevs_discovered": 4, 00:19:24.668 "num_base_bdevs_operational": 4, 00:19:24.669 "base_bdevs_list": [ 00:19:24.669 { 00:19:24.669 "name": "pt1", 00:19:24.669 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:24.669 "is_configured": true, 00:19:24.669 "data_offset": 2048, 00:19:24.669 "data_size": 63488 00:19:24.669 }, 00:19:24.669 { 00:19:24.669 "name": "pt2", 00:19:24.669 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:24.669 "is_configured": true, 00:19:24.669 "data_offset": 2048, 00:19:24.669 
"data_size": 63488 00:19:24.669 }, 00:19:24.669 { 00:19:24.669 "name": "pt3", 00:19:24.669 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:24.669 "is_configured": true, 00:19:24.669 "data_offset": 2048, 00:19:24.669 "data_size": 63488 00:19:24.669 }, 00:19:24.669 { 00:19:24.669 "name": "pt4", 00:19:24.669 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:24.669 "is_configured": true, 00:19:24.669 "data_offset": 2048, 00:19:24.669 "data_size": 63488 00:19:24.669 } 00:19:24.669 ] 00:19:24.669 }' 00:19:24.669 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.669 14:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.926 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:24.926 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:24.926 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:24.926 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:24.926 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:24.926 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:24.926 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:24.927 14:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:24.927 14:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.927 14:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.927 [2024-11-04 14:45:23.998809] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:24.927 14:45:24 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.927 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:24.927 "name": "raid_bdev1", 00:19:24.927 "aliases": [ 00:19:24.927 "133d30a9-5379-43bc-a22b-90499a5b178e" 00:19:24.927 ], 00:19:24.927 "product_name": "Raid Volume", 00:19:24.927 "block_size": 512, 00:19:24.927 "num_blocks": 190464, 00:19:24.927 "uuid": "133d30a9-5379-43bc-a22b-90499a5b178e", 00:19:24.927 "assigned_rate_limits": { 00:19:24.927 "rw_ios_per_sec": 0, 00:19:24.927 "rw_mbytes_per_sec": 0, 00:19:24.927 "r_mbytes_per_sec": 0, 00:19:24.927 "w_mbytes_per_sec": 0 00:19:24.927 }, 00:19:24.927 "claimed": false, 00:19:24.927 "zoned": false, 00:19:24.927 "supported_io_types": { 00:19:24.927 "read": true, 00:19:24.927 "write": true, 00:19:24.927 "unmap": false, 00:19:24.927 "flush": false, 00:19:24.927 "reset": true, 00:19:24.927 "nvme_admin": false, 00:19:24.927 "nvme_io": false, 00:19:24.927 "nvme_io_md": false, 00:19:24.927 "write_zeroes": true, 00:19:24.927 "zcopy": false, 00:19:24.927 "get_zone_info": false, 00:19:24.927 "zone_management": false, 00:19:24.927 "zone_append": false, 00:19:24.927 "compare": false, 00:19:24.927 "compare_and_write": false, 00:19:24.927 "abort": false, 00:19:24.927 "seek_hole": false, 00:19:24.927 "seek_data": false, 00:19:24.927 "copy": false, 00:19:24.927 "nvme_iov_md": false 00:19:24.927 }, 00:19:24.927 "driver_specific": { 00:19:24.927 "raid": { 00:19:24.927 "uuid": "133d30a9-5379-43bc-a22b-90499a5b178e", 00:19:24.927 "strip_size_kb": 64, 00:19:24.927 "state": "online", 00:19:24.927 "raid_level": "raid5f", 00:19:24.927 "superblock": true, 00:19:24.927 "num_base_bdevs": 4, 00:19:24.927 "num_base_bdevs_discovered": 4, 00:19:24.927 "num_base_bdevs_operational": 4, 00:19:24.927 "base_bdevs_list": [ 00:19:24.927 { 00:19:24.927 "name": "pt1", 00:19:24.927 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:24.927 "is_configured": true, 00:19:24.927 "data_offset": 2048, 
00:19:24.927 "data_size": 63488 00:19:24.927 }, 00:19:24.927 { 00:19:24.927 "name": "pt2", 00:19:24.927 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:24.927 "is_configured": true, 00:19:24.927 "data_offset": 2048, 00:19:24.927 "data_size": 63488 00:19:24.927 }, 00:19:24.927 { 00:19:24.927 "name": "pt3", 00:19:24.927 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:24.927 "is_configured": true, 00:19:24.927 "data_offset": 2048, 00:19:24.927 "data_size": 63488 00:19:24.927 }, 00:19:24.927 { 00:19:24.927 "name": "pt4", 00:19:24.927 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:24.927 "is_configured": true, 00:19:24.927 "data_offset": 2048, 00:19:24.927 "data_size": 63488 00:19:24.927 } 00:19:24.927 ] 00:19:24.927 } 00:19:24.927 } 00:19:24.927 }' 00:19:24.927 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:25.185 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:25.185 pt2 00:19:25.185 pt3 00:19:25.185 pt4' 00:19:25.185 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:25.185 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:25.185 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:25.185 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:25.185 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:25.185 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.185 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.185 14:45:24 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.185 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:25.185 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:25.185 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:25.185 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:25.185 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:25.185 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.185 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.185 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.185 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:25.185 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:25.185 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:25.185 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:25.185 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:25.185 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.185 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.185 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.185 14:45:24 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:25.185 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:25.185 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:25.185 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:25.185 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:19:25.185 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.185 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.443 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.443 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:25.443 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:25.443 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:25.443 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.443 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.443 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:25.443 [2024-11-04 14:45:24.358842] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:25.443 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.443 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=133d30a9-5379-43bc-a22b-90499a5b178e 00:19:25.443 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
133d30a9-5379-43bc-a22b-90499a5b178e ']' 00:19:25.443 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:25.443 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.443 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.443 [2024-11-04 14:45:24.406627] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:25.443 [2024-11-04 14:45:24.406657] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:25.443 [2024-11-04 14:45:24.406749] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:25.443 [2024-11-04 14:45:24.406851] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:25.443 [2024-11-04 14:45:24.406873] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:25.443 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:25.444 
14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.444 14:45:24 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:19:25.444 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.703 [2024-11-04 14:45:24.562734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:25.703 [2024-11-04 14:45:24.565308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:25.703 [2024-11-04 14:45:24.565529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:25.703 [2024-11-04 14:45:24.565775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:25.703 [2024-11-04 14:45:24.566064] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:25.703 [2024-11-04 14:45:24.566313] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:25.703 [2024-11-04 14:45:24.566515] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:25.703 [2024-11-04 14:45:24.566684] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:19:25.703 [2024-11-04 14:45:24.566970] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:25.703 [2024-11-04 14:45:24.567187] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:25.703 request: 00:19:25.703 { 00:19:25.703 "name": "raid_bdev1", 00:19:25.703 "raid_level": "raid5f", 00:19:25.703 "base_bdevs": [ 00:19:25.703 "malloc1", 00:19:25.703 "malloc2", 00:19:25.703 "malloc3", 00:19:25.703 "malloc4" 00:19:25.703 ], 00:19:25.703 "strip_size_kb": 64, 00:19:25.703 "superblock": false, 00:19:25.703 "method": "bdev_raid_create", 00:19:25.703 "req_id": 1 00:19:25.703 } 00:19:25.703 Got JSON-RPC error response 
00:19:25.703 response: 00:19:25.703 { 00:19:25.703 "code": -17, 00:19:25.703 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:25.703 } 00:19:25.703 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:25.703 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:19:25.703 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:25.703 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:25.703 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:25.703 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.703 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.703 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.703 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:25.703 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.703 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:25.703 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:25.703 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:25.703 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.703 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.703 [2024-11-04 14:45:24.635497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:25.703 [2024-11-04 14:45:24.635694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:19:25.703 [2024-11-04 14:45:24.635766] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:25.703 [2024-11-04 14:45:24.635887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:25.703 [2024-11-04 14:45:24.638668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:25.703 [2024-11-04 14:45:24.638721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:25.703 [2024-11-04 14:45:24.638824] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:25.703 [2024-11-04 14:45:24.638902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:25.703 pt1 00:19:25.703 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.703 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:19:25.703 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:25.703 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:25.703 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:25.703 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:25.703 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:25.703 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.703 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.703 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.703 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:19:25.703 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.703 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.703 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.703 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.703 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.703 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.703 "name": "raid_bdev1", 00:19:25.703 "uuid": "133d30a9-5379-43bc-a22b-90499a5b178e", 00:19:25.703 "strip_size_kb": 64, 00:19:25.703 "state": "configuring", 00:19:25.703 "raid_level": "raid5f", 00:19:25.703 "superblock": true, 00:19:25.703 "num_base_bdevs": 4, 00:19:25.703 "num_base_bdevs_discovered": 1, 00:19:25.703 "num_base_bdevs_operational": 4, 00:19:25.703 "base_bdevs_list": [ 00:19:25.703 { 00:19:25.703 "name": "pt1", 00:19:25.703 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:25.703 "is_configured": true, 00:19:25.703 "data_offset": 2048, 00:19:25.703 "data_size": 63488 00:19:25.703 }, 00:19:25.703 { 00:19:25.703 "name": null, 00:19:25.703 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:25.703 "is_configured": false, 00:19:25.703 "data_offset": 2048, 00:19:25.703 "data_size": 63488 00:19:25.703 }, 00:19:25.703 { 00:19:25.703 "name": null, 00:19:25.703 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:25.703 "is_configured": false, 00:19:25.703 "data_offset": 2048, 00:19:25.703 "data_size": 63488 00:19:25.703 }, 00:19:25.703 { 00:19:25.703 "name": null, 00:19:25.703 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:25.703 "is_configured": false, 00:19:25.703 "data_offset": 2048, 00:19:25.703 "data_size": 63488 00:19:25.703 } 00:19:25.703 ] 00:19:25.703 }' 
00:19:25.703 14:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.703 14:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.269 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:19:26.269 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:26.269 14:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.269 14:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.269 [2024-11-04 14:45:25.159641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:26.269 [2024-11-04 14:45:25.159731] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:26.269 [2024-11-04 14:45:25.159761] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:26.269 [2024-11-04 14:45:25.159780] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:26.269 [2024-11-04 14:45:25.160328] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:26.269 [2024-11-04 14:45:25.160360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:26.269 [2024-11-04 14:45:25.160456] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:26.269 [2024-11-04 14:45:25.160493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:26.269 pt2 00:19:26.269 14:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.269 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:19:26.269 14:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:26.269 14:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.269 [2024-11-04 14:45:25.167627] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:26.269 14:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.269 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:19:26.269 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:26.269 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:26.269 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:26.269 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:26.269 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:26.269 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:26.269 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:26.269 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:26.269 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:26.269 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.269 14:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.269 14:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.269 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.269 14:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:19:26.269 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:26.269 "name": "raid_bdev1", 00:19:26.269 "uuid": "133d30a9-5379-43bc-a22b-90499a5b178e", 00:19:26.269 "strip_size_kb": 64, 00:19:26.269 "state": "configuring", 00:19:26.269 "raid_level": "raid5f", 00:19:26.269 "superblock": true, 00:19:26.269 "num_base_bdevs": 4, 00:19:26.269 "num_base_bdevs_discovered": 1, 00:19:26.269 "num_base_bdevs_operational": 4, 00:19:26.269 "base_bdevs_list": [ 00:19:26.270 { 00:19:26.270 "name": "pt1", 00:19:26.270 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:26.270 "is_configured": true, 00:19:26.270 "data_offset": 2048, 00:19:26.270 "data_size": 63488 00:19:26.270 }, 00:19:26.270 { 00:19:26.270 "name": null, 00:19:26.270 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:26.270 "is_configured": false, 00:19:26.270 "data_offset": 0, 00:19:26.270 "data_size": 63488 00:19:26.270 }, 00:19:26.270 { 00:19:26.270 "name": null, 00:19:26.270 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:26.270 "is_configured": false, 00:19:26.270 "data_offset": 2048, 00:19:26.270 "data_size": 63488 00:19:26.270 }, 00:19:26.270 { 00:19:26.270 "name": null, 00:19:26.270 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:26.270 "is_configured": false, 00:19:26.270 "data_offset": 2048, 00:19:26.270 "data_size": 63488 00:19:26.270 } 00:19:26.270 ] 00:19:26.270 }' 00:19:26.270 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:26.270 14:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.835 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:26.835 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:26.835 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:19:26.835 14:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.835 14:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.835 [2024-11-04 14:45:25.679763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:26.835 [2024-11-04 14:45:25.679854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:26.835 [2024-11-04 14:45:25.679886] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:26.835 [2024-11-04 14:45:25.679900] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:26.835 [2024-11-04 14:45:25.680496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:26.835 [2024-11-04 14:45:25.680528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:26.835 [2024-11-04 14:45:25.680631] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:26.835 [2024-11-04 14:45:25.680678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:26.835 pt2 00:19:26.835 14:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.835 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:26.835 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:26.835 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:26.835 14:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.835 14:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.835 [2024-11-04 14:45:25.691732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:19:26.835 [2024-11-04 14:45:25.691804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:26.835 [2024-11-04 14:45:25.691831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:26.835 [2024-11-04 14:45:25.691845] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:26.835 [2024-11-04 14:45:25.692322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:26.835 [2024-11-04 14:45:25.692353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:26.835 [2024-11-04 14:45:25.692433] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:26.835 [2024-11-04 14:45:25.692460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:26.835 pt3 00:19:26.835 14:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.835 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:26.835 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:26.835 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:26.835 14:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.835 14:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.835 [2024-11-04 14:45:25.699718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:26.835 [2024-11-04 14:45:25.699916] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:26.835 [2024-11-04 14:45:25.699978] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:26.835 [2024-11-04 14:45:25.699994] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:26.835 [2024-11-04 14:45:25.700470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:26.835 [2024-11-04 14:45:25.700505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:26.835 [2024-11-04 14:45:25.700585] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:26.835 [2024-11-04 14:45:25.700612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:26.835 [2024-11-04 14:45:25.700797] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:26.835 [2024-11-04 14:45:25.700813] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:26.835 [2024-11-04 14:45:25.701132] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:26.835 [2024-11-04 14:45:25.707553] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:26.835 [2024-11-04 14:45:25.707583] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:26.836 [2024-11-04 14:45:25.707802] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:26.836 pt4 00:19:26.836 14:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.836 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:26.836 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:26.836 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:26.836 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:26.836 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:19:26.836 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:26.836 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:26.836 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:26.836 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:26.836 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:26.836 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:26.836 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:26.836 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.836 14:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.836 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.836 14:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.836 14:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.836 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:26.836 "name": "raid_bdev1", 00:19:26.836 "uuid": "133d30a9-5379-43bc-a22b-90499a5b178e", 00:19:26.836 "strip_size_kb": 64, 00:19:26.836 "state": "online", 00:19:26.836 "raid_level": "raid5f", 00:19:26.836 "superblock": true, 00:19:26.836 "num_base_bdevs": 4, 00:19:26.836 "num_base_bdevs_discovered": 4, 00:19:26.836 "num_base_bdevs_operational": 4, 00:19:26.836 "base_bdevs_list": [ 00:19:26.836 { 00:19:26.836 "name": "pt1", 00:19:26.836 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:26.836 "is_configured": true, 00:19:26.836 
"data_offset": 2048, 00:19:26.836 "data_size": 63488 00:19:26.836 }, 00:19:26.836 { 00:19:26.836 "name": "pt2", 00:19:26.836 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:26.836 "is_configured": true, 00:19:26.836 "data_offset": 2048, 00:19:26.836 "data_size": 63488 00:19:26.836 }, 00:19:26.836 { 00:19:26.836 "name": "pt3", 00:19:26.836 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:26.836 "is_configured": true, 00:19:26.836 "data_offset": 2048, 00:19:26.836 "data_size": 63488 00:19:26.836 }, 00:19:26.836 { 00:19:26.836 "name": "pt4", 00:19:26.836 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:26.836 "is_configured": true, 00:19:26.836 "data_offset": 2048, 00:19:26.836 "data_size": 63488 00:19:26.836 } 00:19:26.836 ] 00:19:26.836 }' 00:19:26.836 14:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:26.836 14:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.406 14:45:26 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.406 [2024-11-04 14:45:26.243519] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:27.406 "name": "raid_bdev1", 00:19:27.406 "aliases": [ 00:19:27.406 "133d30a9-5379-43bc-a22b-90499a5b178e" 00:19:27.406 ], 00:19:27.406 "product_name": "Raid Volume", 00:19:27.406 "block_size": 512, 00:19:27.406 "num_blocks": 190464, 00:19:27.406 "uuid": "133d30a9-5379-43bc-a22b-90499a5b178e", 00:19:27.406 "assigned_rate_limits": { 00:19:27.406 "rw_ios_per_sec": 0, 00:19:27.406 "rw_mbytes_per_sec": 0, 00:19:27.406 "r_mbytes_per_sec": 0, 00:19:27.406 "w_mbytes_per_sec": 0 00:19:27.406 }, 00:19:27.406 "claimed": false, 00:19:27.406 "zoned": false, 00:19:27.406 "supported_io_types": { 00:19:27.406 "read": true, 00:19:27.406 "write": true, 00:19:27.406 "unmap": false, 00:19:27.406 "flush": false, 00:19:27.406 "reset": true, 00:19:27.406 "nvme_admin": false, 00:19:27.406 "nvme_io": false, 00:19:27.406 "nvme_io_md": false, 00:19:27.406 "write_zeroes": true, 00:19:27.406 "zcopy": false, 00:19:27.406 "get_zone_info": false, 00:19:27.406 "zone_management": false, 00:19:27.406 "zone_append": false, 00:19:27.406 "compare": false, 00:19:27.406 "compare_and_write": false, 00:19:27.406 "abort": false, 00:19:27.406 "seek_hole": false, 00:19:27.406 "seek_data": false, 00:19:27.406 "copy": false, 00:19:27.406 "nvme_iov_md": false 00:19:27.406 }, 00:19:27.406 "driver_specific": { 00:19:27.406 "raid": { 00:19:27.406 "uuid": "133d30a9-5379-43bc-a22b-90499a5b178e", 00:19:27.406 "strip_size_kb": 64, 00:19:27.406 "state": "online", 00:19:27.406 "raid_level": "raid5f", 00:19:27.406 "superblock": true, 00:19:27.406 "num_base_bdevs": 4, 00:19:27.406 "num_base_bdevs_discovered": 4, 
00:19:27.406 "num_base_bdevs_operational": 4, 00:19:27.406 "base_bdevs_list": [ 00:19:27.406 { 00:19:27.406 "name": "pt1", 00:19:27.406 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:27.406 "is_configured": true, 00:19:27.406 "data_offset": 2048, 00:19:27.406 "data_size": 63488 00:19:27.406 }, 00:19:27.406 { 00:19:27.406 "name": "pt2", 00:19:27.406 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:27.406 "is_configured": true, 00:19:27.406 "data_offset": 2048, 00:19:27.406 "data_size": 63488 00:19:27.406 }, 00:19:27.406 { 00:19:27.406 "name": "pt3", 00:19:27.406 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:27.406 "is_configured": true, 00:19:27.406 "data_offset": 2048, 00:19:27.406 "data_size": 63488 00:19:27.406 }, 00:19:27.406 { 00:19:27.406 "name": "pt4", 00:19:27.406 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:27.406 "is_configured": true, 00:19:27.406 "data_offset": 2048, 00:19:27.406 "data_size": 63488 00:19:27.406 } 00:19:27.406 ] 00:19:27.406 } 00:19:27.406 } 00:19:27.406 }' 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:27.406 pt2 00:19:27.406 pt3 00:19:27.406 pt4' 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.406 14:45:26 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.406 14:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.665 [2024-11-04 14:45:26.599557] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.665 
14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 133d30a9-5379-43bc-a22b-90499a5b178e '!=' 133d30a9-5379-43bc-a22b-90499a5b178e ']' 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.665 [2024-11-04 14:45:26.655415] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.665 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.665 "name": "raid_bdev1", 00:19:27.665 "uuid": "133d30a9-5379-43bc-a22b-90499a5b178e", 00:19:27.665 "strip_size_kb": 64, 00:19:27.665 "state": "online", 00:19:27.665 "raid_level": "raid5f", 00:19:27.665 "superblock": true, 00:19:27.665 "num_base_bdevs": 4, 00:19:27.665 "num_base_bdevs_discovered": 3, 00:19:27.665 "num_base_bdevs_operational": 3, 00:19:27.665 "base_bdevs_list": [ 00:19:27.665 { 00:19:27.665 "name": null, 00:19:27.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.665 "is_configured": false, 00:19:27.665 "data_offset": 0, 00:19:27.665 "data_size": 63488 00:19:27.665 }, 00:19:27.665 { 00:19:27.665 "name": "pt2", 00:19:27.665 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:27.665 "is_configured": true, 00:19:27.665 "data_offset": 2048, 00:19:27.665 "data_size": 63488 00:19:27.665 }, 00:19:27.665 { 00:19:27.665 "name": "pt3", 00:19:27.665 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:27.665 "is_configured": true, 00:19:27.665 "data_offset": 2048, 00:19:27.665 "data_size": 63488 00:19:27.665 }, 00:19:27.665 { 00:19:27.665 "name": "pt4", 00:19:27.665 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:27.665 "is_configured": true, 00:19:27.666 
"data_offset": 2048, 00:19:27.666 "data_size": 63488 00:19:27.666 } 00:19:27.666 ] 00:19:27.666 }' 00:19:27.666 14:45:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.666 14:45:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.233 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:28.233 14:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.233 14:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.233 [2024-11-04 14:45:27.171493] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:28.233 [2024-11-04 14:45:27.171547] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:28.233 [2024-11-04 14:45:27.171641] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:28.233 [2024-11-04 14:45:27.171753] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:28.233 [2024-11-04 14:45:27.171768] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:28.233 14:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.233 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.233 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:28.233 14:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.233 14:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.233 14:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.233 14:45:27 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:28.233 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:28.233 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:28.233 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:28.233 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:28.233 14:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.233 14:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.233 14:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.233 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:28.233 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:28.233 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.234 [2024-11-04 14:45:27.255505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:28.234 [2024-11-04 14:45:27.255719] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.234 [2024-11-04 14:45:27.255761] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:19:28.234 [2024-11-04 14:45:27.255777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.234 [2024-11-04 14:45:27.258704] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.234 [2024-11-04 14:45:27.258893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:28.234 [2024-11-04 14:45:27.259028] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:28.234 [2024-11-04 14:45:27.259089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:28.234 pt2 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.234 "name": "raid_bdev1", 00:19:28.234 "uuid": "133d30a9-5379-43bc-a22b-90499a5b178e", 00:19:28.234 "strip_size_kb": 64, 00:19:28.234 "state": "configuring", 00:19:28.234 "raid_level": "raid5f", 00:19:28.234 "superblock": true, 00:19:28.234 
"num_base_bdevs": 4, 00:19:28.234 "num_base_bdevs_discovered": 1, 00:19:28.234 "num_base_bdevs_operational": 3, 00:19:28.234 "base_bdevs_list": [ 00:19:28.234 { 00:19:28.234 "name": null, 00:19:28.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.234 "is_configured": false, 00:19:28.234 "data_offset": 2048, 00:19:28.234 "data_size": 63488 00:19:28.234 }, 00:19:28.234 { 00:19:28.234 "name": "pt2", 00:19:28.234 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:28.234 "is_configured": true, 00:19:28.234 "data_offset": 2048, 00:19:28.234 "data_size": 63488 00:19:28.234 }, 00:19:28.234 { 00:19:28.234 "name": null, 00:19:28.234 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:28.234 "is_configured": false, 00:19:28.234 "data_offset": 2048, 00:19:28.234 "data_size": 63488 00:19:28.234 }, 00:19:28.234 { 00:19:28.234 "name": null, 00:19:28.234 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:28.234 "is_configured": false, 00:19:28.234 "data_offset": 2048, 00:19:28.234 "data_size": 63488 00:19:28.234 } 00:19:28.234 ] 00:19:28.234 }' 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.234 14:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.801 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:19:28.801 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:28.801 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:28.801 14:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.801 14:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.801 [2024-11-04 14:45:27.775669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:28.801 [2024-11-04 
14:45:27.775746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.801 [2024-11-04 14:45:27.775781] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:19:28.801 [2024-11-04 14:45:27.775797] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.801 [2024-11-04 14:45:27.776373] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.801 [2024-11-04 14:45:27.776404] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:28.801 [2024-11-04 14:45:27.776509] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:28.801 [2024-11-04 14:45:27.776548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:28.801 pt3 00:19:28.801 14:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.801 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:19:28.801 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:28.801 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:28.801 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:28.801 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:28.801 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:28.801 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.801 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:28.801 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:19:28.801 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.801 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.801 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.801 14:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.801 14:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.801 14:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.801 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.801 "name": "raid_bdev1", 00:19:28.801 "uuid": "133d30a9-5379-43bc-a22b-90499a5b178e", 00:19:28.801 "strip_size_kb": 64, 00:19:28.801 "state": "configuring", 00:19:28.801 "raid_level": "raid5f", 00:19:28.801 "superblock": true, 00:19:28.801 "num_base_bdevs": 4, 00:19:28.801 "num_base_bdevs_discovered": 2, 00:19:28.801 "num_base_bdevs_operational": 3, 00:19:28.801 "base_bdevs_list": [ 00:19:28.801 { 00:19:28.801 "name": null, 00:19:28.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.801 "is_configured": false, 00:19:28.801 "data_offset": 2048, 00:19:28.801 "data_size": 63488 00:19:28.801 }, 00:19:28.801 { 00:19:28.801 "name": "pt2", 00:19:28.801 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:28.801 "is_configured": true, 00:19:28.801 "data_offset": 2048, 00:19:28.801 "data_size": 63488 00:19:28.801 }, 00:19:28.801 { 00:19:28.801 "name": "pt3", 00:19:28.801 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:28.801 "is_configured": true, 00:19:28.801 "data_offset": 2048, 00:19:28.801 "data_size": 63488 00:19:28.801 }, 00:19:28.801 { 00:19:28.801 "name": null, 00:19:28.801 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:28.801 "is_configured": false, 00:19:28.801 "data_offset": 2048, 
00:19:28.801 "data_size": 63488 00:19:28.801 } 00:19:28.801 ] 00:19:28.801 }' 00:19:28.801 14:45:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.801 14:45:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.385 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:19:29.385 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:29.385 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:19:29.385 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:29.385 14:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.385 14:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.385 [2024-11-04 14:45:28.295840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:29.385 [2024-11-04 14:45:28.295917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:29.385 [2024-11-04 14:45:28.295965] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:19:29.385 [2024-11-04 14:45:28.295982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:29.385 [2024-11-04 14:45:28.296544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:29.385 [2024-11-04 14:45:28.296575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:29.385 [2024-11-04 14:45:28.296677] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:29.385 [2024-11-04 14:45:28.296709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:29.385 [2024-11-04 14:45:28.296876] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:29.385 [2024-11-04 14:45:28.296892] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:29.385 [2024-11-04 14:45:28.297211] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:29.385 [2024-11-04 14:45:28.303637] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:29.385 [2024-11-04 14:45:28.303668] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:29.385 [2024-11-04 14:45:28.304044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:29.385 pt4 00:19:29.385 14:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.385 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:29.385 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:29.385 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:29.385 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:29.385 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:29.385 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:29.385 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.385 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.385 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.385 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.385 
14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.385 14:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.385 14:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.385 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.385 14:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.385 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.385 "name": "raid_bdev1", 00:19:29.385 "uuid": "133d30a9-5379-43bc-a22b-90499a5b178e", 00:19:29.385 "strip_size_kb": 64, 00:19:29.385 "state": "online", 00:19:29.385 "raid_level": "raid5f", 00:19:29.385 "superblock": true, 00:19:29.385 "num_base_bdevs": 4, 00:19:29.385 "num_base_bdevs_discovered": 3, 00:19:29.385 "num_base_bdevs_operational": 3, 00:19:29.385 "base_bdevs_list": [ 00:19:29.385 { 00:19:29.385 "name": null, 00:19:29.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.385 "is_configured": false, 00:19:29.385 "data_offset": 2048, 00:19:29.385 "data_size": 63488 00:19:29.385 }, 00:19:29.385 { 00:19:29.385 "name": "pt2", 00:19:29.385 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:29.385 "is_configured": true, 00:19:29.385 "data_offset": 2048, 00:19:29.385 "data_size": 63488 00:19:29.385 }, 00:19:29.385 { 00:19:29.385 "name": "pt3", 00:19:29.385 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:29.385 "is_configured": true, 00:19:29.385 "data_offset": 2048, 00:19:29.385 "data_size": 63488 00:19:29.385 }, 00:19:29.385 { 00:19:29.385 "name": "pt4", 00:19:29.385 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:29.385 "is_configured": true, 00:19:29.385 "data_offset": 2048, 00:19:29.385 "data_size": 63488 00:19:29.385 } 00:19:29.385 ] 00:19:29.385 }' 00:19:29.385 14:45:28 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.386 14:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.696 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:29.696 14:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.696 14:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.696 [2024-11-04 14:45:28.791397] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:29.696 [2024-11-04 14:45:28.791430] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:29.697 [2024-11-04 14:45:28.791560] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:29.697 [2024-11-04 14:45:28.791647] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:29.697 [2024-11-04 14:45:28.791666] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:29.697 14:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.697 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:29.697 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.697 14:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.697 14:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.697 14:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.955 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:29.955 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:19:29.955 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:19:29.955 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:19:29.955 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:19:29.955 14:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.955 14:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.955 14:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.955 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:29.955 14:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.955 14:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.955 [2024-11-04 14:45:28.859390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:29.955 [2024-11-04 14:45:28.859463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:29.955 [2024-11-04 14:45:28.859508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:19:29.955 [2024-11-04 14:45:28.859526] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:29.955 [2024-11-04 14:45:28.862452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:29.955 [2024-11-04 14:45:28.862664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:29.955 [2024-11-04 14:45:28.862778] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:29.955 [2024-11-04 14:45:28.862849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:29.955 
[2024-11-04 14:45:28.863044] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:29.955 [2024-11-04 14:45:28.863068] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:29.955 [2024-11-04 14:45:28.863090] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:29.955 [2024-11-04 14:45:28.863162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:29.955 [2024-11-04 14:45:28.863313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:29.955 pt1 00:19:29.955 14:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.955 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:19:29.955 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:19:29.955 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:29.955 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:29.955 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:29.955 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:29.955 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:29.955 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.955 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.955 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.955 14:45:28 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.955 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.955 14:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.955 14:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.955 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.955 14:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.955 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.955 "name": "raid_bdev1", 00:19:29.955 "uuid": "133d30a9-5379-43bc-a22b-90499a5b178e", 00:19:29.955 "strip_size_kb": 64, 00:19:29.955 "state": "configuring", 00:19:29.955 "raid_level": "raid5f", 00:19:29.955 "superblock": true, 00:19:29.955 "num_base_bdevs": 4, 00:19:29.955 "num_base_bdevs_discovered": 2, 00:19:29.955 "num_base_bdevs_operational": 3, 00:19:29.955 "base_bdevs_list": [ 00:19:29.955 { 00:19:29.955 "name": null, 00:19:29.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.955 "is_configured": false, 00:19:29.955 "data_offset": 2048, 00:19:29.955 "data_size": 63488 00:19:29.955 }, 00:19:29.955 { 00:19:29.955 "name": "pt2", 00:19:29.955 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:29.955 "is_configured": true, 00:19:29.955 "data_offset": 2048, 00:19:29.955 "data_size": 63488 00:19:29.955 }, 00:19:29.955 { 00:19:29.955 "name": "pt3", 00:19:29.955 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:29.955 "is_configured": true, 00:19:29.955 "data_offset": 2048, 00:19:29.955 "data_size": 63488 00:19:29.955 }, 00:19:29.955 { 00:19:29.955 "name": null, 00:19:29.955 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:29.955 "is_configured": false, 00:19:29.955 "data_offset": 2048, 00:19:29.955 "data_size": 63488 00:19:29.955 } 00:19:29.955 ] 
00:19:29.955 }' 00:19:29.955 14:45:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.955 14:45:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.522 14:45:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:19:30.522 14:45:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.522 14:45:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.522 14:45:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:30.522 14:45:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.522 14:45:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:19:30.522 14:45:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:30.522 14:45:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.522 14:45:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.522 [2024-11-04 14:45:29.435674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:30.522 [2024-11-04 14:45:29.435766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:30.522 [2024-11-04 14:45:29.435804] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:19:30.522 [2024-11-04 14:45:29.435820] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:30.522 [2024-11-04 14:45:29.436373] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:30.522 [2024-11-04 14:45:29.436405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:19:30.522 [2024-11-04 14:45:29.436516] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:19:30.522 [2024-11-04 14:45:29.436557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:30.522 [2024-11-04 14:45:29.436729] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:30.522 [2024-11-04 14:45:29.436746] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:30.522 [2024-11-04 14:45:29.437066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:30.522 [2024-11-04 14:45:29.443546] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:30.522 [2024-11-04 14:45:29.443578] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:30.522 [2024-11-04 14:45:29.443910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:30.522 pt4 00:19:30.522 14:45:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.522 14:45:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:30.522 14:45:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:30.522 14:45:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:30.522 14:45:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:30.522 14:45:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:30.522 14:45:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:30.522 14:45:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.522 14:45:29 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.522 14:45:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.522 14:45:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.522 14:45:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.522 14:45:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.522 14:45:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.522 14:45:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.522 14:45:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.522 14:45:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.522 "name": "raid_bdev1", 00:19:30.522 "uuid": "133d30a9-5379-43bc-a22b-90499a5b178e", 00:19:30.522 "strip_size_kb": 64, 00:19:30.522 "state": "online", 00:19:30.522 "raid_level": "raid5f", 00:19:30.522 "superblock": true, 00:19:30.522 "num_base_bdevs": 4, 00:19:30.522 "num_base_bdevs_discovered": 3, 00:19:30.522 "num_base_bdevs_operational": 3, 00:19:30.522 "base_bdevs_list": [ 00:19:30.522 { 00:19:30.522 "name": null, 00:19:30.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.522 "is_configured": false, 00:19:30.522 "data_offset": 2048, 00:19:30.522 "data_size": 63488 00:19:30.522 }, 00:19:30.522 { 00:19:30.522 "name": "pt2", 00:19:30.522 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:30.522 "is_configured": true, 00:19:30.522 "data_offset": 2048, 00:19:30.522 "data_size": 63488 00:19:30.522 }, 00:19:30.522 { 00:19:30.522 "name": "pt3", 00:19:30.522 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:30.522 "is_configured": true, 00:19:30.522 "data_offset": 2048, 00:19:30.522 "data_size": 63488 
00:19:30.522 }, 00:19:30.522 { 00:19:30.522 "name": "pt4", 00:19:30.522 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:30.522 "is_configured": true, 00:19:30.522 "data_offset": 2048, 00:19:30.522 "data_size": 63488 00:19:30.522 } 00:19:30.522 ] 00:19:30.522 }' 00:19:30.522 14:45:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.522 14:45:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.087 14:45:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:31.087 14:45:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.087 14:45:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.087 14:45:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:31.087 14:45:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.087 14:45:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:31.087 14:45:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:31.087 14:45:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:31.087 14:45:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.087 14:45:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.087 [2024-11-04 14:45:29.979621] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:31.087 14:45:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.087 14:45:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 133d30a9-5379-43bc-a22b-90499a5b178e '!=' 133d30a9-5379-43bc-a22b-90499a5b178e ']' 00:19:31.087 14:45:30 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84516 00:19:31.087 14:45:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 84516 ']' 00:19:31.087 14:45:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 84516 00:19:31.087 14:45:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:19:31.087 14:45:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:31.087 14:45:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84516 00:19:31.087 killing process with pid 84516 00:19:31.087 14:45:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:31.087 14:45:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:31.087 14:45:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84516' 00:19:31.087 14:45:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 84516 00:19:31.087 [2024-11-04 14:45:30.058391] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:31.087 14:45:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 84516 00:19:31.087 [2024-11-04 14:45:30.058499] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:31.087 [2024-11-04 14:45:30.058602] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:31.087 [2024-11-04 14:45:30.058622] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:31.343 [2024-11-04 14:45:30.413788] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:32.728 14:45:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:19:32.728 
00:19:32.728 real 0m9.288s 00:19:32.728 user 0m15.277s 00:19:32.728 sys 0m1.320s 00:19:32.728 14:45:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:32.728 ************************************ 00:19:32.728 END TEST raid5f_superblock_test 00:19:32.728 ************************************ 00:19:32.728 14:45:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.728 14:45:31 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:19:32.728 14:45:31 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:19:32.728 14:45:31 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:19:32.728 14:45:31 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:32.728 14:45:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:32.728 ************************************ 00:19:32.728 START TEST raid5f_rebuild_test 00:19:32.728 ************************************ 00:19:32.728 14:45:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 false false true 00:19:32.728 14:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:19:32.728 14:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:19:32.728 14:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:19:32.728 14:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:32.728 14:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:32.728 14:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:32.728 14:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:32.728 14:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:19:32.728 14:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:32.728 14:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:32.728 14:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:32.728 14:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:32.728 14:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:32.728 14:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:32.728 14:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:32.728 14:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:32.728 14:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:19:32.728 14:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:32.728 14:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:32.728 14:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:32.728 14:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:32.728 14:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:32.728 14:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:32.728 14:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:32.728 14:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:32.728 14:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:32.728 14:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:19:32.728 14:45:31 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:19:32.728 14:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:19:32.728 14:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:19:32.728 14:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:19:32.729 14:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85007 00:19:32.729 14:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:32.729 14:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85007 00:19:32.729 14:45:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 85007 ']' 00:19:32.729 14:45:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.729 14:45:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:32.729 14:45:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.729 14:45:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:32.729 14:45:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.729 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:32.729 Zero copy mechanism will not be used. 00:19:32.729 [2024-11-04 14:45:31.592974] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:19:32.729 [2024-11-04 14:45:31.593129] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85007 ] 00:19:32.729 [2024-11-04 14:45:31.766709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.987 [2024-11-04 14:45:31.893639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.987 [2024-11-04 14:45:32.085641] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:32.987 [2024-11-04 14:45:32.085699] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:33.553 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:33.553 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:19:33.553 14:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:33.553 14:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:33.553 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.553 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.553 BaseBdev1_malloc 00:19:33.553 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.553 14:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:33.553 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.553 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.553 [2024-11-04 14:45:32.609305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:19:33.553 [2024-11-04 14:45:32.609391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:33.553 [2024-11-04 14:45:32.609425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:33.553 [2024-11-04 14:45:32.609444] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:33.553 [2024-11-04 14:45:32.612218] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:33.553 [2024-11-04 14:45:32.612268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:33.553 BaseBdev1 00:19:33.553 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.553 14:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:33.553 14:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:33.553 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.553 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.553 BaseBdev2_malloc 00:19:33.553 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.553 14:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:33.553 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.553 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.554 [2024-11-04 14:45:32.657106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:33.554 [2024-11-04 14:45:32.657185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:33.554 [2024-11-04 14:45:32.657214] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:33.554 [2024-11-04 14:45:32.657233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:33.554 [2024-11-04 14:45:32.659963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:33.554 [2024-11-04 14:45:32.660014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:33.554 BaseBdev2 00:19:33.554 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.554 14:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:33.554 14:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:33.554 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.554 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.812 BaseBdev3_malloc 00:19:33.812 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.812 14:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:33.812 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.812 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.812 [2024-11-04 14:45:32.733756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:33.812 [2024-11-04 14:45:32.733885] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:33.812 [2024-11-04 14:45:32.733969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:33.812 [2024-11-04 14:45:32.734021] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:33.812 
[2024-11-04 14:45:32.737824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:33.812 [2024-11-04 14:45:32.737898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:33.812 BaseBdev3 00:19:33.812 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.812 14:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:33.812 14:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:33.812 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.812 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.812 BaseBdev4_malloc 00:19:33.812 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.812 14:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:33.812 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.812 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.812 [2024-11-04 14:45:32.798895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:33.812 [2024-11-04 14:45:32.799005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:33.812 [2024-11-04 14:45:32.799037] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:33.812 [2024-11-04 14:45:32.799056] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:33.812 [2024-11-04 14:45:32.801853] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:33.812 [2024-11-04 14:45:32.801909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:19:33.812 BaseBdev4 00:19:33.812 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.812 14:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:33.812 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.812 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.812 spare_malloc 00:19:33.812 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.812 14:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:33.812 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.812 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.812 spare_delay 00:19:33.812 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.812 14:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:33.812 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.812 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.812 [2024-11-04 14:45:32.862655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:33.812 [2024-11-04 14:45:32.862733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:33.812 [2024-11-04 14:45:32.862765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:33.812 [2024-11-04 14:45:32.862781] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:33.812 [2024-11-04 14:45:32.865541] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:33.812 [2024-11-04 14:45:32.865591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:33.812 spare 00:19:33.812 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.812 14:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:19:33.812 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.812 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.812 [2024-11-04 14:45:32.870714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:33.812 [2024-11-04 14:45:32.873087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:33.813 [2024-11-04 14:45:32.873173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:33.813 [2024-11-04 14:45:32.873253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:33.813 [2024-11-04 14:45:32.873377] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:33.813 [2024-11-04 14:45:32.873398] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:33.813 [2024-11-04 14:45:32.873724] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:33.813 [2024-11-04 14:45:32.880441] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:33.813 [2024-11-04 14:45:32.880467] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:33.813 [2024-11-04 14:45:32.880742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:33.813 14:45:32 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.813 14:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:33.813 14:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:33.813 14:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:33.813 14:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:33.813 14:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:33.813 14:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:33.813 14:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:33.813 14:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:33.813 14:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:33.813 14:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:33.813 14:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.813 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.813 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:33.813 14:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.813 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.813 14:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:33.813 "name": "raid_bdev1", 00:19:33.813 "uuid": "7ad2d8a4-89da-40fd-9049-40b61722f20a", 00:19:33.813 "strip_size_kb": 64, 00:19:33.813 "state": "online", 00:19:33.813 
"raid_level": "raid5f", 00:19:33.813 "superblock": false, 00:19:33.813 "num_base_bdevs": 4, 00:19:33.813 "num_base_bdevs_discovered": 4, 00:19:33.813 "num_base_bdevs_operational": 4, 00:19:33.813 "base_bdevs_list": [ 00:19:33.813 { 00:19:33.813 "name": "BaseBdev1", 00:19:33.813 "uuid": "a3abdc70-cc4a-5330-ba40-c07acf4daebc", 00:19:33.813 "is_configured": true, 00:19:33.813 "data_offset": 0, 00:19:33.813 "data_size": 65536 00:19:33.813 }, 00:19:33.813 { 00:19:33.813 "name": "BaseBdev2", 00:19:33.813 "uuid": "3012b10f-11df-52e2-bb88-1a2f7ae2adee", 00:19:33.813 "is_configured": true, 00:19:33.813 "data_offset": 0, 00:19:33.813 "data_size": 65536 00:19:33.813 }, 00:19:33.813 { 00:19:33.813 "name": "BaseBdev3", 00:19:33.813 "uuid": "40193542-9d9f-55fa-8f99-9aa9c1ce5e27", 00:19:33.813 "is_configured": true, 00:19:33.813 "data_offset": 0, 00:19:33.813 "data_size": 65536 00:19:33.813 }, 00:19:33.813 { 00:19:33.813 "name": "BaseBdev4", 00:19:33.813 "uuid": "ffe911ca-d840-59b7-bb0f-e40fc9d70151", 00:19:33.813 "is_configured": true, 00:19:33.813 "data_offset": 0, 00:19:33.813 "data_size": 65536 00:19:33.813 } 00:19:33.813 ] 00:19:33.813 }' 00:19:33.813 14:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:33.813 14:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.379 14:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:34.379 14:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.379 14:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.379 14:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:34.379 [2024-11-04 14:45:33.388549] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:34.379 14:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:19:34.379 14:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:19:34.379 14:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:34.379 14:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.379 14:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.379 14:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.379 14:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.379 14:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:19:34.379 14:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:34.379 14:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:34.379 14:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:34.379 14:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:34.379 14:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:34.379 14:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:34.379 14:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:34.379 14:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:34.379 14:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:34.379 14:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:34.379 14:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:34.379 14:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
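(Annotation, not part of the captured log.) The `raid_bdev_size=196608` read back above is consistent with the raid5f geometry the test set up: four malloc base bdevs created with `bdev_malloc_create 32 512` (32 MiB at 512-byte blocks, i.e. 65536 blocks each, matching `data_size: 65536` in the JSON), with one chunk per stripe spent on parity. A minimal sketch of that arithmetic, assuming those sizes:

```python
# raid5f capacity math implied by the log above.
# Assumption: each base bdev comes from `bdev_malloc_create 32 512`
# (32 MiB backing store, 512-byte blocks).
base_blocks = 32 * 1024 * 1024 // 512   # 65536 blocks per base bdev
num_base_bdevs = 4
# raid5f dedicates one chunk per stripe to parity, so usable capacity
# is (N - 1) base bdevs' worth of blocks.
raid_bdev_size = base_blocks * (num_base_bdevs - 1)
print(raid_bdev_size)  # 196608
```

This matches both the `blockcnt 196608` logged at raid creation and the `jq -r '.[].num_blocks'` result consumed by the test.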
00:19:34.379 14:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:34.947 [2024-11-04 14:45:33.836463] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:34.947 /dev/nbd0 00:19:34.947 14:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:34.947 14:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:34.947 14:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:34.947 14:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:19:34.947 14:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:34.947 14:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:34.947 14:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:34.947 14:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:19:34.947 14:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:34.947 14:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:34.947 14:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:34.947 1+0 records in 00:19:34.947 1+0 records out 00:19:34.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322133 s, 12.7 MB/s 00:19:34.947 14:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:34.947 14:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:19:34.947 14:45:33 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:34.947 14:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:34.947 14:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:19:34.947 14:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:34.947 14:45:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:34.947 14:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:19:34.947 14:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:19:34.947 14:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:19:34.947 14:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:19:35.512 512+0 records in 00:19:35.512 512+0 records out 00:19:35.512 100663296 bytes (101 MB, 96 MiB) copied, 0.598334 s, 168 MB/s 00:19:35.512 14:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:35.512 14:45:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:35.513 14:45:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:35.513 14:45:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:35.513 14:45:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:35.513 14:45:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:35.513 14:45:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:35.771 [2024-11-04 14:45:34.781222] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
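(Annotation, not part of the captured log.) The `bs=196608` used by the `dd` full-stripe writes above is the raid5f write unit: a 64 KiB strip (`strip_size=64`) across the three data chunks of a four-bdev stripe, at 512-byte blocks. A small sketch of that sizing, using the values the test itself set:

```shell
# Full-stripe write sizing for the dd above; values come from the
# log's strip_size=64 (KiB), 4 base bdevs, and 512-byte blocks.
strip_size_kb=64
blocklen=512
num_base_bdevs=4
data_chunks=$((num_base_bdevs - 1))   # raid5f: one parity chunk per stripe
write_unit_blocks=$((strip_size_kb * 1024 / blocklen * data_chunks))
write_unit_bytes=$((write_unit_blocks * blocklen))
echo "${write_unit_blocks} ${write_unit_bytes}"
```

This reproduces the `write_unit_size=384` (blocks) the script computes and the 196608-byte `dd` block size; 512 such writes give the 100663296 bytes reported.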
00:19:35.771 14:45:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:35.771 14:45:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:35.771 14:45:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:35.771 14:45:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:35.771 14:45:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:35.771 14:45:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:35.771 14:45:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:35.771 14:45:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:35.771 14:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:35.771 14:45:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.771 14:45:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.771 [2024-11-04 14:45:34.812809] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:35.771 14:45:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.771 14:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:35.771 14:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:35.771 14:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:35.771 14:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:35.771 14:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:35.771 14:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:19:35.771 14:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.771 14:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.771 14:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.771 14:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.771 14:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.771 14:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.771 14:45:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.771 14:45:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.771 14:45:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.771 14:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.771 "name": "raid_bdev1", 00:19:35.771 "uuid": "7ad2d8a4-89da-40fd-9049-40b61722f20a", 00:19:35.771 "strip_size_kb": 64, 00:19:35.771 "state": "online", 00:19:35.771 "raid_level": "raid5f", 00:19:35.771 "superblock": false, 00:19:35.771 "num_base_bdevs": 4, 00:19:35.771 "num_base_bdevs_discovered": 3, 00:19:35.771 "num_base_bdevs_operational": 3, 00:19:35.771 "base_bdevs_list": [ 00:19:35.771 { 00:19:35.771 "name": null, 00:19:35.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.771 "is_configured": false, 00:19:35.771 "data_offset": 0, 00:19:35.771 "data_size": 65536 00:19:35.771 }, 00:19:35.771 { 00:19:35.771 "name": "BaseBdev2", 00:19:35.771 "uuid": "3012b10f-11df-52e2-bb88-1a2f7ae2adee", 00:19:35.771 "is_configured": true, 00:19:35.771 "data_offset": 0, 00:19:35.771 "data_size": 65536 00:19:35.771 }, 00:19:35.771 { 00:19:35.771 "name": "BaseBdev3", 00:19:35.771 "uuid": 
"40193542-9d9f-55fa-8f99-9aa9c1ce5e27", 00:19:35.771 "is_configured": true, 00:19:35.771 "data_offset": 0, 00:19:35.771 "data_size": 65536 00:19:35.771 }, 00:19:35.771 { 00:19:35.771 "name": "BaseBdev4", 00:19:35.771 "uuid": "ffe911ca-d840-59b7-bb0f-e40fc9d70151", 00:19:35.771 "is_configured": true, 00:19:35.771 "data_offset": 0, 00:19:35.771 "data_size": 65536 00:19:35.771 } 00:19:35.771 ] 00:19:35.771 }' 00:19:35.771 14:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.771 14:45:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.338 14:45:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:36.338 14:45:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.338 14:45:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.338 [2024-11-04 14:45:35.336921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:36.338 [2024-11-04 14:45:35.351604] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:19:36.338 14:45:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.338 14:45:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:36.338 [2024-11-04 14:45:35.361021] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:37.274 14:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:37.274 14:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:37.274 14:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:37.274 14:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:37.274 14:45:36 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:37.274 14:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.274 14:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.274 14:45:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.274 14:45:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.274 14:45:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.532 14:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:37.532 "name": "raid_bdev1", 00:19:37.532 "uuid": "7ad2d8a4-89da-40fd-9049-40b61722f20a", 00:19:37.532 "strip_size_kb": 64, 00:19:37.532 "state": "online", 00:19:37.532 "raid_level": "raid5f", 00:19:37.532 "superblock": false, 00:19:37.532 "num_base_bdevs": 4, 00:19:37.532 "num_base_bdevs_discovered": 4, 00:19:37.532 "num_base_bdevs_operational": 4, 00:19:37.532 "process": { 00:19:37.532 "type": "rebuild", 00:19:37.532 "target": "spare", 00:19:37.532 "progress": { 00:19:37.532 "blocks": 17280, 00:19:37.532 "percent": 8 00:19:37.532 } 00:19:37.532 }, 00:19:37.532 "base_bdevs_list": [ 00:19:37.532 { 00:19:37.532 "name": "spare", 00:19:37.532 "uuid": "54ede1a4-1090-51bc-8f8c-ce537382f286", 00:19:37.532 "is_configured": true, 00:19:37.532 "data_offset": 0, 00:19:37.532 "data_size": 65536 00:19:37.532 }, 00:19:37.532 { 00:19:37.532 "name": "BaseBdev2", 00:19:37.532 "uuid": "3012b10f-11df-52e2-bb88-1a2f7ae2adee", 00:19:37.532 "is_configured": true, 00:19:37.532 "data_offset": 0, 00:19:37.532 "data_size": 65536 00:19:37.532 }, 00:19:37.532 { 00:19:37.532 "name": "BaseBdev3", 00:19:37.532 "uuid": "40193542-9d9f-55fa-8f99-9aa9c1ce5e27", 00:19:37.532 "is_configured": true, 00:19:37.532 "data_offset": 0, 00:19:37.532 "data_size": 65536 00:19:37.532 }, 
00:19:37.532 { 00:19:37.532 "name": "BaseBdev4", 00:19:37.532 "uuid": "ffe911ca-d840-59b7-bb0f-e40fc9d70151", 00:19:37.532 "is_configured": true, 00:19:37.532 "data_offset": 0, 00:19:37.532 "data_size": 65536 00:19:37.532 } 00:19:37.532 ] 00:19:37.532 }' 00:19:37.532 14:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:37.532 14:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:37.532 14:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:37.532 14:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:37.532 14:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:37.532 14:45:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.532 14:45:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.532 [2024-11-04 14:45:36.538481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:37.532 [2024-11-04 14:45:36.573181] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:37.532 [2024-11-04 14:45:36.573511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:37.532 [2024-11-04 14:45:36.573543] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:37.532 [2024-11-04 14:45:36.573560] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:37.532 14:45:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.532 14:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:37.532 14:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:19:37.532 14:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:37.532 14:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:37.532 14:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:37.532 14:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:37.532 14:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:37.532 14:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:37.532 14:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:37.532 14:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:37.532 14:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.532 14:45:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.532 14:45:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.532 14:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.532 14:45:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.791 14:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.791 "name": "raid_bdev1", 00:19:37.791 "uuid": "7ad2d8a4-89da-40fd-9049-40b61722f20a", 00:19:37.791 "strip_size_kb": 64, 00:19:37.791 "state": "online", 00:19:37.791 "raid_level": "raid5f", 00:19:37.791 "superblock": false, 00:19:37.791 "num_base_bdevs": 4, 00:19:37.791 "num_base_bdevs_discovered": 3, 00:19:37.791 "num_base_bdevs_operational": 3, 00:19:37.791 "base_bdevs_list": [ 00:19:37.791 { 00:19:37.791 "name": null, 00:19:37.791 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:37.791 "is_configured": false, 00:19:37.791 "data_offset": 0, 00:19:37.791 "data_size": 65536 00:19:37.791 }, 00:19:37.791 { 00:19:37.791 "name": "BaseBdev2", 00:19:37.791 "uuid": "3012b10f-11df-52e2-bb88-1a2f7ae2adee", 00:19:37.791 "is_configured": true, 00:19:37.791 "data_offset": 0, 00:19:37.791 "data_size": 65536 00:19:37.791 }, 00:19:37.791 { 00:19:37.791 "name": "BaseBdev3", 00:19:37.791 "uuid": "40193542-9d9f-55fa-8f99-9aa9c1ce5e27", 00:19:37.791 "is_configured": true, 00:19:37.791 "data_offset": 0, 00:19:37.791 "data_size": 65536 00:19:37.791 }, 00:19:37.791 { 00:19:37.791 "name": "BaseBdev4", 00:19:37.791 "uuid": "ffe911ca-d840-59b7-bb0f-e40fc9d70151", 00:19:37.791 "is_configured": true, 00:19:37.791 "data_offset": 0, 00:19:37.791 "data_size": 65536 00:19:37.791 } 00:19:37.791 ] 00:19:37.791 }' 00:19:37.791 14:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:37.791 14:45:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.049 14:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:38.049 14:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:38.049 14:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:38.049 14:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:38.049 14:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:38.049 14:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.049 14:45:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.049 14:45:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.049 14:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.049 14:45:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.309 14:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:38.309 "name": "raid_bdev1", 00:19:38.309 "uuid": "7ad2d8a4-89da-40fd-9049-40b61722f20a", 00:19:38.309 "strip_size_kb": 64, 00:19:38.309 "state": "online", 00:19:38.309 "raid_level": "raid5f", 00:19:38.309 "superblock": false, 00:19:38.309 "num_base_bdevs": 4, 00:19:38.309 "num_base_bdevs_discovered": 3, 00:19:38.309 "num_base_bdevs_operational": 3, 00:19:38.309 "base_bdevs_list": [ 00:19:38.309 { 00:19:38.309 "name": null, 00:19:38.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.309 "is_configured": false, 00:19:38.309 "data_offset": 0, 00:19:38.309 "data_size": 65536 00:19:38.309 }, 00:19:38.309 { 00:19:38.309 "name": "BaseBdev2", 00:19:38.309 "uuid": "3012b10f-11df-52e2-bb88-1a2f7ae2adee", 00:19:38.309 "is_configured": true, 00:19:38.309 "data_offset": 0, 00:19:38.309 "data_size": 65536 00:19:38.309 }, 00:19:38.309 { 00:19:38.309 "name": "BaseBdev3", 00:19:38.309 "uuid": "40193542-9d9f-55fa-8f99-9aa9c1ce5e27", 00:19:38.309 "is_configured": true, 00:19:38.309 "data_offset": 0, 00:19:38.309 "data_size": 65536 00:19:38.309 }, 00:19:38.309 { 00:19:38.309 "name": "BaseBdev4", 00:19:38.309 "uuid": "ffe911ca-d840-59b7-bb0f-e40fc9d70151", 00:19:38.309 "is_configured": true, 00:19:38.309 "data_offset": 0, 00:19:38.309 "data_size": 65536 00:19:38.309 } 00:19:38.309 ] 00:19:38.309 }' 00:19:38.309 14:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:38.309 14:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:38.309 14:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:38.309 14:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:19:38.309 14:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:38.309 14:45:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.309 14:45:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.309 [2024-11-04 14:45:37.288498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:38.309 [2024-11-04 14:45:37.301957] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:19:38.309 14:45:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.309 14:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:38.309 [2024-11-04 14:45:37.310757] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:39.243 14:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:39.243 14:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:39.243 14:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:39.243 14:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:39.243 14:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:39.243 14:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.243 14:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.243 14:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.243 14:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.243 14:45:38 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.243 14:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:39.243 "name": "raid_bdev1", 00:19:39.243 "uuid": "7ad2d8a4-89da-40fd-9049-40b61722f20a", 00:19:39.243 "strip_size_kb": 64, 00:19:39.243 "state": "online", 00:19:39.243 "raid_level": "raid5f", 00:19:39.243 "superblock": false, 00:19:39.243 "num_base_bdevs": 4, 00:19:39.243 "num_base_bdevs_discovered": 4, 00:19:39.243 "num_base_bdevs_operational": 4, 00:19:39.243 "process": { 00:19:39.243 "type": "rebuild", 00:19:39.243 "target": "spare", 00:19:39.243 "progress": { 00:19:39.243 "blocks": 17280, 00:19:39.243 "percent": 8 00:19:39.243 } 00:19:39.243 }, 00:19:39.243 "base_bdevs_list": [ 00:19:39.243 { 00:19:39.243 "name": "spare", 00:19:39.243 "uuid": "54ede1a4-1090-51bc-8f8c-ce537382f286", 00:19:39.243 "is_configured": true, 00:19:39.243 "data_offset": 0, 00:19:39.243 "data_size": 65536 00:19:39.243 }, 00:19:39.243 { 00:19:39.243 "name": "BaseBdev2", 00:19:39.243 "uuid": "3012b10f-11df-52e2-bb88-1a2f7ae2adee", 00:19:39.243 "is_configured": true, 00:19:39.243 "data_offset": 0, 00:19:39.243 "data_size": 65536 00:19:39.243 }, 00:19:39.243 { 00:19:39.243 "name": "BaseBdev3", 00:19:39.243 "uuid": "40193542-9d9f-55fa-8f99-9aa9c1ce5e27", 00:19:39.243 "is_configured": true, 00:19:39.243 "data_offset": 0, 00:19:39.243 "data_size": 65536 00:19:39.243 }, 00:19:39.243 { 00:19:39.243 "name": "BaseBdev4", 00:19:39.243 "uuid": "ffe911ca-d840-59b7-bb0f-e40fc9d70151", 00:19:39.243 "is_configured": true, 00:19:39.243 "data_offset": 0, 00:19:39.243 "data_size": 65536 00:19:39.243 } 00:19:39.243 ] 00:19:39.243 }' 00:19:39.243 14:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:39.502 14:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:39.502 14:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:19:39.502 14:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:39.502 14:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:19:39.502 14:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:39.502 14:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:19:39.502 14:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=671 00:19:39.502 14:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:39.502 14:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:39.502 14:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:39.502 14:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:39.502 14:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:39.502 14:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:39.502 14:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.502 14:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.502 14:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.502 14:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.502 14:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.502 14:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:39.502 "name": "raid_bdev1", 00:19:39.502 "uuid": "7ad2d8a4-89da-40fd-9049-40b61722f20a", 00:19:39.502 "strip_size_kb": 64, 
00:19:39.502 "state": "online", 00:19:39.502 "raid_level": "raid5f", 00:19:39.502 "superblock": false, 00:19:39.502 "num_base_bdevs": 4, 00:19:39.502 "num_base_bdevs_discovered": 4, 00:19:39.502 "num_base_bdevs_operational": 4, 00:19:39.502 "process": { 00:19:39.502 "type": "rebuild", 00:19:39.502 "target": "spare", 00:19:39.502 "progress": { 00:19:39.502 "blocks": 21120, 00:19:39.502 "percent": 10 00:19:39.502 } 00:19:39.502 }, 00:19:39.502 "base_bdevs_list": [ 00:19:39.502 { 00:19:39.502 "name": "spare", 00:19:39.502 "uuid": "54ede1a4-1090-51bc-8f8c-ce537382f286", 00:19:39.502 "is_configured": true, 00:19:39.502 "data_offset": 0, 00:19:39.502 "data_size": 65536 00:19:39.502 }, 00:19:39.502 { 00:19:39.502 "name": "BaseBdev2", 00:19:39.502 "uuid": "3012b10f-11df-52e2-bb88-1a2f7ae2adee", 00:19:39.502 "is_configured": true, 00:19:39.502 "data_offset": 0, 00:19:39.502 "data_size": 65536 00:19:39.502 }, 00:19:39.502 { 00:19:39.502 "name": "BaseBdev3", 00:19:39.502 "uuid": "40193542-9d9f-55fa-8f99-9aa9c1ce5e27", 00:19:39.502 "is_configured": true, 00:19:39.502 "data_offset": 0, 00:19:39.502 "data_size": 65536 00:19:39.502 }, 00:19:39.502 { 00:19:39.502 "name": "BaseBdev4", 00:19:39.502 "uuid": "ffe911ca-d840-59b7-bb0f-e40fc9d70151", 00:19:39.502 "is_configured": true, 00:19:39.502 "data_offset": 0, 00:19:39.502 "data_size": 65536 00:19:39.502 } 00:19:39.502 ] 00:19:39.502 }' 00:19:39.502 14:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:39.502 14:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:39.502 14:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:39.502 14:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:39.502 14:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:40.879 14:45:39 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:40.879 14:45:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:40.879 14:45:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:40.879 14:45:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:40.879 14:45:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:40.879 14:45:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:40.879 14:45:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.879 14:45:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.879 14:45:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.879 14:45:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.879 14:45:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.879 14:45:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:40.879 "name": "raid_bdev1", 00:19:40.879 "uuid": "7ad2d8a4-89da-40fd-9049-40b61722f20a", 00:19:40.879 "strip_size_kb": 64, 00:19:40.879 "state": "online", 00:19:40.879 "raid_level": "raid5f", 00:19:40.879 "superblock": false, 00:19:40.879 "num_base_bdevs": 4, 00:19:40.879 "num_base_bdevs_discovered": 4, 00:19:40.879 "num_base_bdevs_operational": 4, 00:19:40.879 "process": { 00:19:40.879 "type": "rebuild", 00:19:40.879 "target": "spare", 00:19:40.879 "progress": { 00:19:40.879 "blocks": 42240, 00:19:40.879 "percent": 21 00:19:40.879 } 00:19:40.879 }, 00:19:40.879 "base_bdevs_list": [ 00:19:40.879 { 00:19:40.879 "name": "spare", 00:19:40.879 "uuid": "54ede1a4-1090-51bc-8f8c-ce537382f286", 00:19:40.879 "is_configured": true, 
00:19:40.879 "data_offset": 0, 00:19:40.879 "data_size": 65536 00:19:40.879 }, 00:19:40.879 { 00:19:40.879 "name": "BaseBdev2", 00:19:40.879 "uuid": "3012b10f-11df-52e2-bb88-1a2f7ae2adee", 00:19:40.879 "is_configured": true, 00:19:40.879 "data_offset": 0, 00:19:40.879 "data_size": 65536 00:19:40.879 }, 00:19:40.879 { 00:19:40.879 "name": "BaseBdev3", 00:19:40.879 "uuid": "40193542-9d9f-55fa-8f99-9aa9c1ce5e27", 00:19:40.879 "is_configured": true, 00:19:40.879 "data_offset": 0, 00:19:40.879 "data_size": 65536 00:19:40.879 }, 00:19:40.879 { 00:19:40.879 "name": "BaseBdev4", 00:19:40.879 "uuid": "ffe911ca-d840-59b7-bb0f-e40fc9d70151", 00:19:40.879 "is_configured": true, 00:19:40.879 "data_offset": 0, 00:19:40.879 "data_size": 65536 00:19:40.879 } 00:19:40.879 ] 00:19:40.879 }' 00:19:40.879 14:45:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:40.879 14:45:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:40.879 14:45:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:40.879 14:45:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:40.879 14:45:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:41.816 14:45:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:41.816 14:45:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:41.816 14:45:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:41.816 14:45:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:41.816 14:45:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:41.816 14:45:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:19:41.816 14:45:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.816 14:45:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.816 14:45:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.816 14:45:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.816 14:45:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.816 14:45:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:41.816 "name": "raid_bdev1", 00:19:41.816 "uuid": "7ad2d8a4-89da-40fd-9049-40b61722f20a", 00:19:41.816 "strip_size_kb": 64, 00:19:41.816 "state": "online", 00:19:41.816 "raid_level": "raid5f", 00:19:41.816 "superblock": false, 00:19:41.816 "num_base_bdevs": 4, 00:19:41.817 "num_base_bdevs_discovered": 4, 00:19:41.817 "num_base_bdevs_operational": 4, 00:19:41.817 "process": { 00:19:41.817 "type": "rebuild", 00:19:41.817 "target": "spare", 00:19:41.817 "progress": { 00:19:41.817 "blocks": 65280, 00:19:41.817 "percent": 33 00:19:41.817 } 00:19:41.817 }, 00:19:41.817 "base_bdevs_list": [ 00:19:41.817 { 00:19:41.817 "name": "spare", 00:19:41.817 "uuid": "54ede1a4-1090-51bc-8f8c-ce537382f286", 00:19:41.817 "is_configured": true, 00:19:41.817 "data_offset": 0, 00:19:41.817 "data_size": 65536 00:19:41.817 }, 00:19:41.817 { 00:19:41.817 "name": "BaseBdev2", 00:19:41.817 "uuid": "3012b10f-11df-52e2-bb88-1a2f7ae2adee", 00:19:41.817 "is_configured": true, 00:19:41.817 "data_offset": 0, 00:19:41.817 "data_size": 65536 00:19:41.817 }, 00:19:41.817 { 00:19:41.817 "name": "BaseBdev3", 00:19:41.817 "uuid": "40193542-9d9f-55fa-8f99-9aa9c1ce5e27", 00:19:41.817 "is_configured": true, 00:19:41.817 "data_offset": 0, 00:19:41.817 "data_size": 65536 00:19:41.817 }, 00:19:41.817 { 00:19:41.817 "name": "BaseBdev4", 00:19:41.817 "uuid": 
"ffe911ca-d840-59b7-bb0f-e40fc9d70151", 00:19:41.817 "is_configured": true, 00:19:41.817 "data_offset": 0, 00:19:41.817 "data_size": 65536 00:19:41.817 } 00:19:41.817 ] 00:19:41.817 }' 00:19:41.817 14:45:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:41.817 14:45:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:41.817 14:45:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:41.817 14:45:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:41.817 14:45:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:43.191 14:45:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:43.191 14:45:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:43.191 14:45:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:43.191 14:45:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:43.191 14:45:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:43.191 14:45:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:43.191 14:45:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.191 14:45:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.191 14:45:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.191 14:45:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.191 14:45:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.191 14:45:41 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:43.191 "name": "raid_bdev1", 00:19:43.191 "uuid": "7ad2d8a4-89da-40fd-9049-40b61722f20a", 00:19:43.191 "strip_size_kb": 64, 00:19:43.191 "state": "online", 00:19:43.191 "raid_level": "raid5f", 00:19:43.191 "superblock": false, 00:19:43.191 "num_base_bdevs": 4, 00:19:43.191 "num_base_bdevs_discovered": 4, 00:19:43.191 "num_base_bdevs_operational": 4, 00:19:43.191 "process": { 00:19:43.191 "type": "rebuild", 00:19:43.191 "target": "spare", 00:19:43.191 "progress": { 00:19:43.191 "blocks": 86400, 00:19:43.191 "percent": 43 00:19:43.191 } 00:19:43.191 }, 00:19:43.191 "base_bdevs_list": [ 00:19:43.191 { 00:19:43.191 "name": "spare", 00:19:43.191 "uuid": "54ede1a4-1090-51bc-8f8c-ce537382f286", 00:19:43.191 "is_configured": true, 00:19:43.191 "data_offset": 0, 00:19:43.191 "data_size": 65536 00:19:43.191 }, 00:19:43.191 { 00:19:43.191 "name": "BaseBdev2", 00:19:43.191 "uuid": "3012b10f-11df-52e2-bb88-1a2f7ae2adee", 00:19:43.191 "is_configured": true, 00:19:43.191 "data_offset": 0, 00:19:43.191 "data_size": 65536 00:19:43.191 }, 00:19:43.191 { 00:19:43.191 "name": "BaseBdev3", 00:19:43.191 "uuid": "40193542-9d9f-55fa-8f99-9aa9c1ce5e27", 00:19:43.191 "is_configured": true, 00:19:43.191 "data_offset": 0, 00:19:43.191 "data_size": 65536 00:19:43.191 }, 00:19:43.191 { 00:19:43.191 "name": "BaseBdev4", 00:19:43.191 "uuid": "ffe911ca-d840-59b7-bb0f-e40fc9d70151", 00:19:43.191 "is_configured": true, 00:19:43.191 "data_offset": 0, 00:19:43.191 "data_size": 65536 00:19:43.191 } 00:19:43.191 ] 00:19:43.191 }' 00:19:43.191 14:45:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:43.191 14:45:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:43.191 14:45:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:43.191 14:45:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:19:43.191 14:45:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:44.202 14:45:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:44.202 14:45:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:44.202 14:45:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:44.202 14:45:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:44.202 14:45:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:44.202 14:45:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:44.202 14:45:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.202 14:45:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.202 14:45:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.202 14:45:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.202 14:45:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.202 14:45:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:44.202 "name": "raid_bdev1", 00:19:44.202 "uuid": "7ad2d8a4-89da-40fd-9049-40b61722f20a", 00:19:44.202 "strip_size_kb": 64, 00:19:44.202 "state": "online", 00:19:44.202 "raid_level": "raid5f", 00:19:44.202 "superblock": false, 00:19:44.202 "num_base_bdevs": 4, 00:19:44.202 "num_base_bdevs_discovered": 4, 00:19:44.202 "num_base_bdevs_operational": 4, 00:19:44.202 "process": { 00:19:44.202 "type": "rebuild", 00:19:44.202 "target": "spare", 00:19:44.202 "progress": { 00:19:44.202 "blocks": 109440, 00:19:44.202 "percent": 55 00:19:44.202 } 00:19:44.202 }, 00:19:44.202 
"base_bdevs_list": [ 00:19:44.202 { 00:19:44.202 "name": "spare", 00:19:44.202 "uuid": "54ede1a4-1090-51bc-8f8c-ce537382f286", 00:19:44.202 "is_configured": true, 00:19:44.202 "data_offset": 0, 00:19:44.202 "data_size": 65536 00:19:44.202 }, 00:19:44.202 { 00:19:44.202 "name": "BaseBdev2", 00:19:44.202 "uuid": "3012b10f-11df-52e2-bb88-1a2f7ae2adee", 00:19:44.202 "is_configured": true, 00:19:44.202 "data_offset": 0, 00:19:44.202 "data_size": 65536 00:19:44.202 }, 00:19:44.202 { 00:19:44.202 "name": "BaseBdev3", 00:19:44.202 "uuid": "40193542-9d9f-55fa-8f99-9aa9c1ce5e27", 00:19:44.202 "is_configured": true, 00:19:44.202 "data_offset": 0, 00:19:44.202 "data_size": 65536 00:19:44.202 }, 00:19:44.202 { 00:19:44.202 "name": "BaseBdev4", 00:19:44.202 "uuid": "ffe911ca-d840-59b7-bb0f-e40fc9d70151", 00:19:44.202 "is_configured": true, 00:19:44.202 "data_offset": 0, 00:19:44.202 "data_size": 65536 00:19:44.202 } 00:19:44.202 ] 00:19:44.202 }' 00:19:44.202 14:45:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:44.202 14:45:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:44.202 14:45:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:44.202 14:45:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:44.202 14:45:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:45.138 14:45:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:45.138 14:45:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:45.138 14:45:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:45.138 14:45:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:45.138 14:45:44 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:45.138 14:45:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:45.138 14:45:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.138 14:45:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.138 14:45:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.138 14:45:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.138 14:45:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.397 14:45:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:45.397 "name": "raid_bdev1", 00:19:45.397 "uuid": "7ad2d8a4-89da-40fd-9049-40b61722f20a", 00:19:45.397 "strip_size_kb": 64, 00:19:45.397 "state": "online", 00:19:45.397 "raid_level": "raid5f", 00:19:45.397 "superblock": false, 00:19:45.397 "num_base_bdevs": 4, 00:19:45.397 "num_base_bdevs_discovered": 4, 00:19:45.397 "num_base_bdevs_operational": 4, 00:19:45.397 "process": { 00:19:45.397 "type": "rebuild", 00:19:45.397 "target": "spare", 00:19:45.397 "progress": { 00:19:45.397 "blocks": 130560, 00:19:45.397 "percent": 66 00:19:45.397 } 00:19:45.397 }, 00:19:45.397 "base_bdevs_list": [ 00:19:45.397 { 00:19:45.397 "name": "spare", 00:19:45.397 "uuid": "54ede1a4-1090-51bc-8f8c-ce537382f286", 00:19:45.397 "is_configured": true, 00:19:45.397 "data_offset": 0, 00:19:45.397 "data_size": 65536 00:19:45.397 }, 00:19:45.397 { 00:19:45.397 "name": "BaseBdev2", 00:19:45.397 "uuid": "3012b10f-11df-52e2-bb88-1a2f7ae2adee", 00:19:45.397 "is_configured": true, 00:19:45.397 "data_offset": 0, 00:19:45.397 "data_size": 65536 00:19:45.397 }, 00:19:45.397 { 00:19:45.397 "name": "BaseBdev3", 00:19:45.397 "uuid": "40193542-9d9f-55fa-8f99-9aa9c1ce5e27", 00:19:45.397 
"is_configured": true, 00:19:45.397 "data_offset": 0, 00:19:45.397 "data_size": 65536 00:19:45.397 }, 00:19:45.397 { 00:19:45.397 "name": "BaseBdev4", 00:19:45.397 "uuid": "ffe911ca-d840-59b7-bb0f-e40fc9d70151", 00:19:45.397 "is_configured": true, 00:19:45.397 "data_offset": 0, 00:19:45.397 "data_size": 65536 00:19:45.397 } 00:19:45.397 ] 00:19:45.397 }' 00:19:45.397 14:45:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:45.397 14:45:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:45.397 14:45:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:45.397 14:45:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:45.397 14:45:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:46.333 14:45:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:46.333 14:45:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:46.333 14:45:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:46.333 14:45:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:46.333 14:45:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:46.333 14:45:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:46.333 14:45:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.333 14:45:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.333 14:45:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.333 14:45:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:19:46.333 14:45:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.591 14:45:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:46.591 "name": "raid_bdev1", 00:19:46.591 "uuid": "7ad2d8a4-89da-40fd-9049-40b61722f20a", 00:19:46.591 "strip_size_kb": 64, 00:19:46.591 "state": "online", 00:19:46.591 "raid_level": "raid5f", 00:19:46.591 "superblock": false, 00:19:46.591 "num_base_bdevs": 4, 00:19:46.591 "num_base_bdevs_discovered": 4, 00:19:46.591 "num_base_bdevs_operational": 4, 00:19:46.591 "process": { 00:19:46.591 "type": "rebuild", 00:19:46.591 "target": "spare", 00:19:46.591 "progress": { 00:19:46.591 "blocks": 153600, 00:19:46.591 "percent": 78 00:19:46.592 } 00:19:46.592 }, 00:19:46.592 "base_bdevs_list": [ 00:19:46.592 { 00:19:46.592 "name": "spare", 00:19:46.592 "uuid": "54ede1a4-1090-51bc-8f8c-ce537382f286", 00:19:46.592 "is_configured": true, 00:19:46.592 "data_offset": 0, 00:19:46.592 "data_size": 65536 00:19:46.592 }, 00:19:46.592 { 00:19:46.592 "name": "BaseBdev2", 00:19:46.592 "uuid": "3012b10f-11df-52e2-bb88-1a2f7ae2adee", 00:19:46.592 "is_configured": true, 00:19:46.592 "data_offset": 0, 00:19:46.592 "data_size": 65536 00:19:46.592 }, 00:19:46.592 { 00:19:46.592 "name": "BaseBdev3", 00:19:46.592 "uuid": "40193542-9d9f-55fa-8f99-9aa9c1ce5e27", 00:19:46.592 "is_configured": true, 00:19:46.592 "data_offset": 0, 00:19:46.592 "data_size": 65536 00:19:46.592 }, 00:19:46.592 { 00:19:46.592 "name": "BaseBdev4", 00:19:46.592 "uuid": "ffe911ca-d840-59b7-bb0f-e40fc9d70151", 00:19:46.592 "is_configured": true, 00:19:46.592 "data_offset": 0, 00:19:46.592 "data_size": 65536 00:19:46.592 } 00:19:46.592 ] 00:19:46.592 }' 00:19:46.592 14:45:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:46.592 14:45:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:46.592 14:45:45 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:46.592 14:45:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:46.592 14:45:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:47.526 14:45:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:47.526 14:45:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:47.526 14:45:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:47.526 14:45:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:47.526 14:45:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:47.526 14:45:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:47.526 14:45:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.526 14:45:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.526 14:45:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.526 14:45:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.526 14:45:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.526 14:45:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:47.526 "name": "raid_bdev1", 00:19:47.526 "uuid": "7ad2d8a4-89da-40fd-9049-40b61722f20a", 00:19:47.526 "strip_size_kb": 64, 00:19:47.526 "state": "online", 00:19:47.526 "raid_level": "raid5f", 00:19:47.526 "superblock": false, 00:19:47.526 "num_base_bdevs": 4, 00:19:47.526 "num_base_bdevs_discovered": 4, 00:19:47.526 "num_base_bdevs_operational": 4, 00:19:47.526 "process": { 00:19:47.526 
"type": "rebuild", 00:19:47.526 "target": "spare", 00:19:47.526 "progress": { 00:19:47.526 "blocks": 174720, 00:19:47.526 "percent": 88 00:19:47.526 } 00:19:47.526 }, 00:19:47.526 "base_bdevs_list": [ 00:19:47.526 { 00:19:47.526 "name": "spare", 00:19:47.526 "uuid": "54ede1a4-1090-51bc-8f8c-ce537382f286", 00:19:47.526 "is_configured": true, 00:19:47.526 "data_offset": 0, 00:19:47.526 "data_size": 65536 00:19:47.526 }, 00:19:47.526 { 00:19:47.526 "name": "BaseBdev2", 00:19:47.526 "uuid": "3012b10f-11df-52e2-bb88-1a2f7ae2adee", 00:19:47.526 "is_configured": true, 00:19:47.526 "data_offset": 0, 00:19:47.526 "data_size": 65536 00:19:47.526 }, 00:19:47.526 { 00:19:47.526 "name": "BaseBdev3", 00:19:47.527 "uuid": "40193542-9d9f-55fa-8f99-9aa9c1ce5e27", 00:19:47.527 "is_configured": true, 00:19:47.527 "data_offset": 0, 00:19:47.527 "data_size": 65536 00:19:47.527 }, 00:19:47.527 { 00:19:47.527 "name": "BaseBdev4", 00:19:47.527 "uuid": "ffe911ca-d840-59b7-bb0f-e40fc9d70151", 00:19:47.527 "is_configured": true, 00:19:47.527 "data_offset": 0, 00:19:47.527 "data_size": 65536 00:19:47.527 } 00:19:47.527 ] 00:19:47.527 }' 00:19:47.527 14:45:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:47.785 14:45:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:47.785 14:45:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:47.785 14:45:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:47.785 14:45:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:48.720 [2024-11-04 14:45:47.717620] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:48.720 [2024-11-04 14:45:47.717726] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:48.720 [2024-11-04 14:45:47.717793] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:48.720 14:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:48.720 14:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:48.720 14:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:48.720 14:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:48.720 14:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:48.720 14:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:48.720 14:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.720 14:45:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.720 14:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.720 14:45:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.720 14:45:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.720 14:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:48.720 "name": "raid_bdev1", 00:19:48.720 "uuid": "7ad2d8a4-89da-40fd-9049-40b61722f20a", 00:19:48.720 "strip_size_kb": 64, 00:19:48.720 "state": "online", 00:19:48.720 "raid_level": "raid5f", 00:19:48.720 "superblock": false, 00:19:48.720 "num_base_bdevs": 4, 00:19:48.720 "num_base_bdevs_discovered": 4, 00:19:48.720 "num_base_bdevs_operational": 4, 00:19:48.720 "base_bdevs_list": [ 00:19:48.720 { 00:19:48.720 "name": "spare", 00:19:48.720 "uuid": "54ede1a4-1090-51bc-8f8c-ce537382f286", 00:19:48.720 "is_configured": true, 00:19:48.720 "data_offset": 0, 00:19:48.720 "data_size": 65536 00:19:48.720 }, 00:19:48.720 { 
00:19:48.720 "name": "BaseBdev2", 00:19:48.720 "uuid": "3012b10f-11df-52e2-bb88-1a2f7ae2adee", 00:19:48.720 "is_configured": true, 00:19:48.720 "data_offset": 0, 00:19:48.720 "data_size": 65536 00:19:48.720 }, 00:19:48.720 { 00:19:48.720 "name": "BaseBdev3", 00:19:48.720 "uuid": "40193542-9d9f-55fa-8f99-9aa9c1ce5e27", 00:19:48.720 "is_configured": true, 00:19:48.720 "data_offset": 0, 00:19:48.720 "data_size": 65536 00:19:48.720 }, 00:19:48.720 { 00:19:48.720 "name": "BaseBdev4", 00:19:48.720 "uuid": "ffe911ca-d840-59b7-bb0f-e40fc9d70151", 00:19:48.720 "is_configured": true, 00:19:48.720 "data_offset": 0, 00:19:48.720 "data_size": 65536 00:19:48.720 } 00:19:48.720 ] 00:19:48.720 }' 00:19:48.720 14:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:48.720 14:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:48.720 14:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:48.979 14:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:48.979 14:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:19:48.979 14:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:48.979 14:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:48.979 14:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:48.979 14:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:48.979 14:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:48.979 14:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.979 14:45:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:48.979 14:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.979 14:45:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.979 14:45:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.979 14:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:48.979 "name": "raid_bdev1", 00:19:48.979 "uuid": "7ad2d8a4-89da-40fd-9049-40b61722f20a", 00:19:48.979 "strip_size_kb": 64, 00:19:48.979 "state": "online", 00:19:48.979 "raid_level": "raid5f", 00:19:48.979 "superblock": false, 00:19:48.979 "num_base_bdevs": 4, 00:19:48.979 "num_base_bdevs_discovered": 4, 00:19:48.979 "num_base_bdevs_operational": 4, 00:19:48.979 "base_bdevs_list": [ 00:19:48.979 { 00:19:48.979 "name": "spare", 00:19:48.979 "uuid": "54ede1a4-1090-51bc-8f8c-ce537382f286", 00:19:48.979 "is_configured": true, 00:19:48.979 "data_offset": 0, 00:19:48.979 "data_size": 65536 00:19:48.979 }, 00:19:48.979 { 00:19:48.979 "name": "BaseBdev2", 00:19:48.979 "uuid": "3012b10f-11df-52e2-bb88-1a2f7ae2adee", 00:19:48.979 "is_configured": true, 00:19:48.979 "data_offset": 0, 00:19:48.979 "data_size": 65536 00:19:48.979 }, 00:19:48.979 { 00:19:48.979 "name": "BaseBdev3", 00:19:48.979 "uuid": "40193542-9d9f-55fa-8f99-9aa9c1ce5e27", 00:19:48.979 "is_configured": true, 00:19:48.979 "data_offset": 0, 00:19:48.979 "data_size": 65536 00:19:48.979 }, 00:19:48.979 { 00:19:48.979 "name": "BaseBdev4", 00:19:48.979 "uuid": "ffe911ca-d840-59b7-bb0f-e40fc9d70151", 00:19:48.979 "is_configured": true, 00:19:48.979 "data_offset": 0, 00:19:48.979 "data_size": 65536 00:19:48.979 } 00:19:48.979 ] 00:19:48.979 }' 00:19:48.979 14:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:48.979 14:45:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:48.979 14:45:47 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:48.979 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:48.979 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:48.979 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:48.979 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:48.979 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:48.979 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:48.979 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:48.979 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:48.979 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:48.979 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:48.979 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:48.979 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.979 14:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.979 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.979 14:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.979 14:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.238 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:49.238 "name": "raid_bdev1", 00:19:49.238 "uuid": 
"7ad2d8a4-89da-40fd-9049-40b61722f20a", 00:19:49.238 "strip_size_kb": 64, 00:19:49.238 "state": "online", 00:19:49.238 "raid_level": "raid5f", 00:19:49.238 "superblock": false, 00:19:49.238 "num_base_bdevs": 4, 00:19:49.238 "num_base_bdevs_discovered": 4, 00:19:49.238 "num_base_bdevs_operational": 4, 00:19:49.238 "base_bdevs_list": [ 00:19:49.238 { 00:19:49.238 "name": "spare", 00:19:49.238 "uuid": "54ede1a4-1090-51bc-8f8c-ce537382f286", 00:19:49.238 "is_configured": true, 00:19:49.238 "data_offset": 0, 00:19:49.238 "data_size": 65536 00:19:49.238 }, 00:19:49.238 { 00:19:49.238 "name": "BaseBdev2", 00:19:49.238 "uuid": "3012b10f-11df-52e2-bb88-1a2f7ae2adee", 00:19:49.238 "is_configured": true, 00:19:49.238 "data_offset": 0, 00:19:49.238 "data_size": 65536 00:19:49.238 }, 00:19:49.238 { 00:19:49.238 "name": "BaseBdev3", 00:19:49.238 "uuid": "40193542-9d9f-55fa-8f99-9aa9c1ce5e27", 00:19:49.238 "is_configured": true, 00:19:49.238 "data_offset": 0, 00:19:49.238 "data_size": 65536 00:19:49.238 }, 00:19:49.238 { 00:19:49.238 "name": "BaseBdev4", 00:19:49.238 "uuid": "ffe911ca-d840-59b7-bb0f-e40fc9d70151", 00:19:49.238 "is_configured": true, 00:19:49.238 "data_offset": 0, 00:19:49.238 "data_size": 65536 00:19:49.238 } 00:19:49.238 ] 00:19:49.238 }' 00:19:49.238 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:49.238 14:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.497 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:49.497 14:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.497 14:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.497 [2024-11-04 14:45:48.517063] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:49.497 [2024-11-04 14:45:48.517110] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:19:49.497 [2024-11-04 14:45:48.517207] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:49.497 [2024-11-04 14:45:48.517327] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:49.497 [2024-11-04 14:45:48.517344] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:49.497 14:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.497 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.497 14:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.497 14:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:49.497 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:19:49.497 14:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.497 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:49.497 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:49.497 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:49.497 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:49.497 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:49.497 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:49.497 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:49.497 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:49.497 14:45:48 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:49.497 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:49.497 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:49.497 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:49.497 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:49.757 /dev/nbd0 00:19:49.757 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:49.757 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:49.757 14:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:49.757 14:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:19:49.757 14:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:49.757 14:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:49.757 14:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:49.757 14:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:19:49.757 14:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:49.757 14:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:49.757 14:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:49.757 1+0 records in 00:19:49.757 1+0 records out 00:19:49.757 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299341 s, 13.7 MB/s 00:19:49.757 14:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:49.757 14:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:19:49.757 14:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:49.757 14:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:49.757 14:45:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:19:49.757 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:49.757 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:49.757 14:45:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:50.015 /dev/nbd1 00:19:50.274 14:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:50.274 14:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:50.274 14:45:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:19:50.274 14:45:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:19:50.274 14:45:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:50.274 14:45:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:50.274 14:45:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:19:50.274 14:45:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:19:50.274 14:45:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:50.274 14:45:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:50.274 14:45:49 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:50.274 1+0 records in 00:19:50.274 1+0 records out 00:19:50.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384398 s, 10.7 MB/s 00:19:50.274 14:45:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:50.274 14:45:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:19:50.274 14:45:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:50.274 14:45:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:50.274 14:45:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:19:50.274 14:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:50.274 14:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:50.274 14:45:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:50.274 14:45:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:50.274 14:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:50.274 14:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:50.274 14:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:50.274 14:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:50.274 14:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:50.274 14:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:19:50.841 14:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:50.841 14:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:50.841 14:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:50.841 14:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:50.841 14:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:50.841 14:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:50.841 14:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:50.841 14:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:50.841 14:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:50.841 14:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:51.099 14:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:51.099 14:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:51.099 14:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:51.099 14:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:51.099 14:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:51.099 14:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:51.099 14:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:51.099 14:45:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:51.099 14:45:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:19:51.099 14:45:49 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85007 00:19:51.099 14:45:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 85007 ']' 00:19:51.099 14:45:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 85007 00:19:51.099 14:45:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:19:51.099 14:45:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:51.099 14:45:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85007 00:19:51.099 killing process with pid 85007 00:19:51.099 Received shutdown signal, test time was about 60.000000 seconds 00:19:51.099 00:19:51.099 Latency(us) 00:19:51.099 [2024-11-04T14:45:50.222Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.099 [2024-11-04T14:45:50.222Z] =================================================================================================================== 00:19:51.099 [2024-11-04T14:45:50.222Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:51.099 14:45:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:51.099 14:45:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:51.099 14:45:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85007' 00:19:51.099 14:45:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 85007 00:19:51.099 14:45:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 85007 00:19:51.099 [2024-11-04 14:45:50.025528] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:51.357 [2024-11-04 14:45:50.472552] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # 
return 0 00:19:52.733 00:19:52.733 real 0m20.020s 00:19:52.733 user 0m24.919s 00:19:52.733 sys 0m2.172s 00:19:52.733 ************************************ 00:19:52.733 END TEST raid5f_rebuild_test 00:19:52.733 ************************************ 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.733 14:45:51 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:19:52.733 14:45:51 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:19:52.733 14:45:51 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:52.733 14:45:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:52.733 ************************************ 00:19:52.733 START TEST raid5f_rebuild_test_sb 00:19:52.733 ************************************ 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 true false true 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:52.733 14:45:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:52.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85519 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85519 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 85519 ']' 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:52.733 14:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.733 [2024-11-04 14:45:51.693340] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:19:52.733 [2024-11-04 14:45:51.693769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85519 ] 00:19:52.733 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:52.733 Zero copy mechanism will not be used. 00:19:52.994 [2024-11-04 14:45:51.877871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.994 [2024-11-04 14:45:52.012365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.257 [2024-11-04 14:45:52.221400] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:53.257 [2024-11-04 14:45:52.221637] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.825 BaseBdev1_malloc 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.825 [2024-11-04 14:45:52.753186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:53.825 [2024-11-04 14:45:52.753273] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:53.825 [2024-11-04 14:45:52.753308] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:53.825 [2024-11-04 14:45:52.753328] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:53.825 [2024-11-04 14:45:52.756151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:53.825 [2024-11-04 14:45:52.756344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:53.825 BaseBdev1 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.825 BaseBdev2_malloc 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:53.825 
14:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.825 [2024-11-04 14:45:52.809121] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:53.825 [2024-11-04 14:45:52.809202] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:53.825 [2024-11-04 14:45:52.809233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:53.825 [2024-11-04 14:45:52.809254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:53.825 [2024-11-04 14:45:52.812019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:53.825 [2024-11-04 14:45:52.812065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:53.825 BaseBdev2 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.825 BaseBdev3_malloc 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:53.825 [2024-11-04 14:45:52.873872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:53.825 [2024-11-04 14:45:52.873965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:53.825 [2024-11-04 14:45:52.874002] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:53.825 [2024-11-04 14:45:52.874034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:53.825 [2024-11-04 14:45:52.876816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:53.825 [2024-11-04 14:45:52.876869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:53.825 BaseBdev3 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.825 BaseBdev4_malloc 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.825 [2024-11-04 14:45:52.930004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:53.825 
[2024-11-04 14:45:52.930106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:53.825 [2024-11-04 14:45:52.930138] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:53.825 [2024-11-04 14:45:52.930156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:53.825 [2024-11-04 14:45:52.932959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:53.825 [2024-11-04 14:45:52.933009] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:53.825 BaseBdev4 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.825 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.085 spare_malloc 00:19:54.085 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.085 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:54.085 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.085 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.085 spare_delay 00:19:54.085 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.085 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:54.085 14:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.085 14:45:52 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.085 [2024-11-04 14:45:52.997937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:54.085 [2024-11-04 14:45:52.998040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:54.085 [2024-11-04 14:45:52.998078] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:54.085 [2024-11-04 14:45:52.998104] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:54.085 [2024-11-04 14:45:53.001018] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:54.085 [2024-11-04 14:45:53.001070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:54.085 spare 00:19:54.085 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.085 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:19:54.085 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.085 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.085 [2024-11-04 14:45:53.010045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:54.085 [2024-11-04 14:45:53.012521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:54.085 [2024-11-04 14:45:53.012744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:54.085 [2024-11-04 14:45:53.012843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:54.085 [2024-11-04 14:45:53.013152] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:54.085 [2024-11-04 
14:45:53.013179] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:54.085 [2024-11-04 14:45:53.013540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:54.085 [2024-11-04 14:45:53.020409] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:54.085 [2024-11-04 14:45:53.020563] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:54.085 [2024-11-04 14:45:53.021036] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:54.085 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.085 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:54.085 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:54.085 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:54.085 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:54.085 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:54.085 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:54.085 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:54.085 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:54.085 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:54.085 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:54.085 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.085 14:45:53 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.085 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.085 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.085 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.085 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:54.085 "name": "raid_bdev1", 00:19:54.085 "uuid": "d975524f-37e6-4da3-9beb-2f0978301a8e", 00:19:54.085 "strip_size_kb": 64, 00:19:54.085 "state": "online", 00:19:54.085 "raid_level": "raid5f", 00:19:54.085 "superblock": true, 00:19:54.085 "num_base_bdevs": 4, 00:19:54.085 "num_base_bdevs_discovered": 4, 00:19:54.085 "num_base_bdevs_operational": 4, 00:19:54.085 "base_bdevs_list": [ 00:19:54.085 { 00:19:54.085 "name": "BaseBdev1", 00:19:54.085 "uuid": "84afda23-d80a-5b73-b08b-0b72422f237d", 00:19:54.085 "is_configured": true, 00:19:54.085 "data_offset": 2048, 00:19:54.085 "data_size": 63488 00:19:54.085 }, 00:19:54.085 { 00:19:54.085 "name": "BaseBdev2", 00:19:54.085 "uuid": "c75fb840-5b57-59d0-b48f-c6b83a18b82c", 00:19:54.085 "is_configured": true, 00:19:54.085 "data_offset": 2048, 00:19:54.085 "data_size": 63488 00:19:54.085 }, 00:19:54.085 { 00:19:54.085 "name": "BaseBdev3", 00:19:54.085 "uuid": "3ffdfa36-3ebd-5efb-b0e1-4fb69813832d", 00:19:54.085 "is_configured": true, 00:19:54.085 "data_offset": 2048, 00:19:54.085 "data_size": 63488 00:19:54.085 }, 00:19:54.085 { 00:19:54.085 "name": "BaseBdev4", 00:19:54.085 "uuid": "87720334-1434-59a4-8268-1886adc0a771", 00:19:54.085 "is_configured": true, 00:19:54.085 "data_offset": 2048, 00:19:54.085 "data_size": 63488 00:19:54.085 } 00:19:54.085 ] 00:19:54.085 }' 00:19:54.085 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:54.085 14:45:53 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.653 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:54.653 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.653 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.654 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:54.654 [2024-11-04 14:45:53.540997] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:54.654 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.654 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:19:54.654 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.654 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.654 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.654 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:54.654 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.654 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:19:54.654 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:54.654 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:54.654 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:54.654 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:54.654 14:45:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:54.654 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:54.654 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:54.654 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:54.654 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:54.654 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:54.654 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:54.654 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:54.654 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:54.912 [2024-11-04 14:45:53.940884] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:54.912 /dev/nbd0 00:19:54.912 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:54.912 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:54.912 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:54.912 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:19:54.912 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:54.912 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:54.912 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:54.912 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 
00:19:54.912 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:54.912 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:54.912 14:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:54.912 1+0 records in 00:19:54.912 1+0 records out 00:19:54.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389155 s, 10.5 MB/s 00:19:54.912 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:54.912 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:19:54.912 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:54.912 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:54.912 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:19:54.912 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:54.912 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:54.912 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:19:54.912 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:19:54.912 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:19:54.912 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:19:55.480 496+0 records in 00:19:55.480 496+0 records out 00:19:55.480 97517568 bytes (98 MB, 93 MiB) copied, 0.584434 s, 167 MB/s 00:19:55.480 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:55.480 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:55.480 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:55.480 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:55.480 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:55.480 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:55.481 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:56.047 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:56.047 [2024-11-04 14:45:54.905548] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:56.047 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:56.047 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:56.047 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:56.047 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:56.047 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:56.047 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:56.047 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:56.047 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:56.047 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.047 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:56.047 [2024-11-04 14:45:54.917065] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:56.047 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.047 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:56.047 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:56.047 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:56.047 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:56.047 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:56.047 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:56.047 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.047 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.047 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:56.047 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:56.047 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.047 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.047 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.047 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.047 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.047 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.047 "name": "raid_bdev1", 00:19:56.047 "uuid": "d975524f-37e6-4da3-9beb-2f0978301a8e", 00:19:56.047 "strip_size_kb": 64, 00:19:56.047 "state": "online", 00:19:56.047 "raid_level": "raid5f", 00:19:56.047 "superblock": true, 00:19:56.047 "num_base_bdevs": 4, 00:19:56.047 "num_base_bdevs_discovered": 3, 00:19:56.047 "num_base_bdevs_operational": 3, 00:19:56.047 "base_bdevs_list": [ 00:19:56.047 { 00:19:56.047 "name": null, 00:19:56.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.047 "is_configured": false, 00:19:56.047 "data_offset": 0, 00:19:56.047 "data_size": 63488 00:19:56.047 }, 00:19:56.047 { 00:19:56.047 "name": "BaseBdev2", 00:19:56.047 "uuid": "c75fb840-5b57-59d0-b48f-c6b83a18b82c", 00:19:56.047 "is_configured": true, 00:19:56.047 "data_offset": 2048, 00:19:56.047 "data_size": 63488 00:19:56.047 }, 00:19:56.047 { 00:19:56.047 "name": "BaseBdev3", 00:19:56.047 "uuid": "3ffdfa36-3ebd-5efb-b0e1-4fb69813832d", 00:19:56.047 "is_configured": true, 00:19:56.047 "data_offset": 2048, 00:19:56.047 "data_size": 63488 00:19:56.047 }, 00:19:56.047 { 00:19:56.047 "name": "BaseBdev4", 00:19:56.047 "uuid": "87720334-1434-59a4-8268-1886adc0a771", 00:19:56.047 "is_configured": true, 00:19:56.047 "data_offset": 2048, 00:19:56.047 "data_size": 63488 00:19:56.047 } 00:19:56.047 ] 00:19:56.047 }' 00:19:56.047 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.047 14:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.613 14:45:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:56.613 14:45:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.613 14:45:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.613 [2024-11-04 14:45:55.437202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:19:56.613 [2024-11-04 14:45:55.451296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:19:56.613 14:45:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.613 14:45:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:56.613 [2024-11-04 14:45:55.460330] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:57.548 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:57.548 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:57.548 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:57.548 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:57.548 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:57.548 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.548 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.548 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.548 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.548 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.548 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:57.548 "name": "raid_bdev1", 00:19:57.548 "uuid": "d975524f-37e6-4da3-9beb-2f0978301a8e", 00:19:57.548 "strip_size_kb": 64, 00:19:57.548 "state": "online", 00:19:57.548 "raid_level": "raid5f", 00:19:57.548 "superblock": true, 00:19:57.548 "num_base_bdevs": 4, 
00:19:57.548 "num_base_bdevs_discovered": 4, 00:19:57.548 "num_base_bdevs_operational": 4, 00:19:57.548 "process": { 00:19:57.548 "type": "rebuild", 00:19:57.548 "target": "spare", 00:19:57.548 "progress": { 00:19:57.548 "blocks": 17280, 00:19:57.548 "percent": 9 00:19:57.548 } 00:19:57.548 }, 00:19:57.548 "base_bdevs_list": [ 00:19:57.548 { 00:19:57.548 "name": "spare", 00:19:57.548 "uuid": "43f36880-7860-53c7-bb01-6064e34d4145", 00:19:57.548 "is_configured": true, 00:19:57.548 "data_offset": 2048, 00:19:57.548 "data_size": 63488 00:19:57.548 }, 00:19:57.548 { 00:19:57.548 "name": "BaseBdev2", 00:19:57.548 "uuid": "c75fb840-5b57-59d0-b48f-c6b83a18b82c", 00:19:57.548 "is_configured": true, 00:19:57.548 "data_offset": 2048, 00:19:57.548 "data_size": 63488 00:19:57.548 }, 00:19:57.548 { 00:19:57.548 "name": "BaseBdev3", 00:19:57.548 "uuid": "3ffdfa36-3ebd-5efb-b0e1-4fb69813832d", 00:19:57.548 "is_configured": true, 00:19:57.548 "data_offset": 2048, 00:19:57.548 "data_size": 63488 00:19:57.548 }, 00:19:57.548 { 00:19:57.548 "name": "BaseBdev4", 00:19:57.548 "uuid": "87720334-1434-59a4-8268-1886adc0a771", 00:19:57.548 "is_configured": true, 00:19:57.548 "data_offset": 2048, 00:19:57.548 "data_size": 63488 00:19:57.548 } 00:19:57.548 ] 00:19:57.548 }' 00:19:57.548 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:57.548 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:57.548 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:57.548 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:57.548 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:57.548 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.548 14:45:56 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.548 [2024-11-04 14:45:56.614607] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:57.806 [2024-11-04 14:45:56.673745] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:57.806 [2024-11-04 14:45:56.674204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:57.806 [2024-11-04 14:45:56.674406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:57.806 [2024-11-04 14:45:56.674465] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:57.806 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.807 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:57.807 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:57.807 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:57.807 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:57.807 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:57.807 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:57.807 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.807 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.807 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.807 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.807 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.807 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.807 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.807 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.807 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.807 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.807 "name": "raid_bdev1", 00:19:57.807 "uuid": "d975524f-37e6-4da3-9beb-2f0978301a8e", 00:19:57.807 "strip_size_kb": 64, 00:19:57.807 "state": "online", 00:19:57.807 "raid_level": "raid5f", 00:19:57.807 "superblock": true, 00:19:57.807 "num_base_bdevs": 4, 00:19:57.807 "num_base_bdevs_discovered": 3, 00:19:57.807 "num_base_bdevs_operational": 3, 00:19:57.807 "base_bdevs_list": [ 00:19:57.807 { 00:19:57.807 "name": null, 00:19:57.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.807 "is_configured": false, 00:19:57.807 "data_offset": 0, 00:19:57.807 "data_size": 63488 00:19:57.807 }, 00:19:57.807 { 00:19:57.807 "name": "BaseBdev2", 00:19:57.807 "uuid": "c75fb840-5b57-59d0-b48f-c6b83a18b82c", 00:19:57.807 "is_configured": true, 00:19:57.807 "data_offset": 2048, 00:19:57.807 "data_size": 63488 00:19:57.807 }, 00:19:57.807 { 00:19:57.807 "name": "BaseBdev3", 00:19:57.807 "uuid": "3ffdfa36-3ebd-5efb-b0e1-4fb69813832d", 00:19:57.807 "is_configured": true, 00:19:57.807 "data_offset": 2048, 00:19:57.807 "data_size": 63488 00:19:57.807 }, 00:19:57.807 { 00:19:57.807 "name": "BaseBdev4", 00:19:57.807 "uuid": "87720334-1434-59a4-8268-1886adc0a771", 00:19:57.807 "is_configured": true, 00:19:57.807 "data_offset": 2048, 00:19:57.807 "data_size": 63488 00:19:57.807 } 00:19:57.807 ] 00:19:57.807 }' 00:19:57.807 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.807 14:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.373 14:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:58.373 14:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:58.373 14:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:58.373 14:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:58.373 14:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:58.373 14:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.373 14:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.373 14:45:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.373 14:45:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.373 14:45:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.373 14:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:58.373 "name": "raid_bdev1", 00:19:58.373 "uuid": "d975524f-37e6-4da3-9beb-2f0978301a8e", 00:19:58.373 "strip_size_kb": 64, 00:19:58.373 "state": "online", 00:19:58.373 "raid_level": "raid5f", 00:19:58.373 "superblock": true, 00:19:58.373 "num_base_bdevs": 4, 00:19:58.373 "num_base_bdevs_discovered": 3, 00:19:58.373 "num_base_bdevs_operational": 3, 00:19:58.373 "base_bdevs_list": [ 00:19:58.373 { 00:19:58.373 "name": null, 00:19:58.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.373 "is_configured": false, 00:19:58.373 "data_offset": 0, 00:19:58.373 "data_size": 63488 00:19:58.373 }, 00:19:58.373 { 
00:19:58.373 "name": "BaseBdev2", 00:19:58.373 "uuid": "c75fb840-5b57-59d0-b48f-c6b83a18b82c", 00:19:58.373 "is_configured": true, 00:19:58.374 "data_offset": 2048, 00:19:58.374 "data_size": 63488 00:19:58.374 }, 00:19:58.374 { 00:19:58.374 "name": "BaseBdev3", 00:19:58.374 "uuid": "3ffdfa36-3ebd-5efb-b0e1-4fb69813832d", 00:19:58.374 "is_configured": true, 00:19:58.374 "data_offset": 2048, 00:19:58.374 "data_size": 63488 00:19:58.374 }, 00:19:58.374 { 00:19:58.374 "name": "BaseBdev4", 00:19:58.374 "uuid": "87720334-1434-59a4-8268-1886adc0a771", 00:19:58.374 "is_configured": true, 00:19:58.374 "data_offset": 2048, 00:19:58.374 "data_size": 63488 00:19:58.374 } 00:19:58.374 ] 00:19:58.374 }' 00:19:58.374 14:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:58.374 14:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:58.374 14:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:58.374 14:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:58.374 14:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:58.374 14:45:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.374 14:45:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.374 [2024-11-04 14:45:57.377600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:58.374 [2024-11-04 14:45:57.390892] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:19:58.374 14:45:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.374 14:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:58.374 [2024-11-04 14:45:57.399710] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:59.309 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:59.309 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:59.309 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:59.309 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:59.309 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:59.309 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.309 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.309 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.309 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:59.309 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.567 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:59.567 "name": "raid_bdev1", 00:19:59.567 "uuid": "d975524f-37e6-4da3-9beb-2f0978301a8e", 00:19:59.567 "strip_size_kb": 64, 00:19:59.567 "state": "online", 00:19:59.567 "raid_level": "raid5f", 00:19:59.567 "superblock": true, 00:19:59.567 "num_base_bdevs": 4, 00:19:59.567 "num_base_bdevs_discovered": 4, 00:19:59.567 "num_base_bdevs_operational": 4, 00:19:59.567 "process": { 00:19:59.567 "type": "rebuild", 00:19:59.567 "target": "spare", 00:19:59.567 "progress": { 00:19:59.567 "blocks": 17280, 00:19:59.567 "percent": 9 00:19:59.567 } 00:19:59.567 }, 00:19:59.567 "base_bdevs_list": [ 00:19:59.567 { 00:19:59.567 "name": "spare", 00:19:59.567 "uuid": 
"43f36880-7860-53c7-bb01-6064e34d4145", 00:19:59.567 "is_configured": true, 00:19:59.567 "data_offset": 2048, 00:19:59.567 "data_size": 63488 00:19:59.567 }, 00:19:59.567 { 00:19:59.567 "name": "BaseBdev2", 00:19:59.567 "uuid": "c75fb840-5b57-59d0-b48f-c6b83a18b82c", 00:19:59.567 "is_configured": true, 00:19:59.567 "data_offset": 2048, 00:19:59.567 "data_size": 63488 00:19:59.567 }, 00:19:59.567 { 00:19:59.567 "name": "BaseBdev3", 00:19:59.567 "uuid": "3ffdfa36-3ebd-5efb-b0e1-4fb69813832d", 00:19:59.567 "is_configured": true, 00:19:59.567 "data_offset": 2048, 00:19:59.567 "data_size": 63488 00:19:59.567 }, 00:19:59.567 { 00:19:59.568 "name": "BaseBdev4", 00:19:59.568 "uuid": "87720334-1434-59a4-8268-1886adc0a771", 00:19:59.568 "is_configured": true, 00:19:59.568 "data_offset": 2048, 00:19:59.568 "data_size": 63488 00:19:59.568 } 00:19:59.568 ] 00:19:59.568 }' 00:19:59.568 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:59.568 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:59.568 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:59.568 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:59.568 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:59.568 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:59.568 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:59.568 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:59.568 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:19:59.568 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=691 00:19:59.568 
14:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:59.568 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:59.568 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:59.568 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:59.568 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:59.568 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:59.568 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.568 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.568 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.568 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:59.568 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.568 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:59.568 "name": "raid_bdev1", 00:19:59.568 "uuid": "d975524f-37e6-4da3-9beb-2f0978301a8e", 00:19:59.568 "strip_size_kb": 64, 00:19:59.568 "state": "online", 00:19:59.568 "raid_level": "raid5f", 00:19:59.568 "superblock": true, 00:19:59.568 "num_base_bdevs": 4, 00:19:59.568 "num_base_bdevs_discovered": 4, 00:19:59.568 "num_base_bdevs_operational": 4, 00:19:59.568 "process": { 00:19:59.568 "type": "rebuild", 00:19:59.568 "target": "spare", 00:19:59.568 "progress": { 00:19:59.568 "blocks": 21120, 00:19:59.568 "percent": 11 00:19:59.568 } 00:19:59.568 }, 00:19:59.568 "base_bdevs_list": [ 00:19:59.568 { 00:19:59.568 "name": "spare", 00:19:59.568 "uuid": 
"43f36880-7860-53c7-bb01-6064e34d4145", 00:19:59.568 "is_configured": true, 00:19:59.568 "data_offset": 2048, 00:19:59.568 "data_size": 63488 00:19:59.568 }, 00:19:59.568 { 00:19:59.568 "name": "BaseBdev2", 00:19:59.568 "uuid": "c75fb840-5b57-59d0-b48f-c6b83a18b82c", 00:19:59.568 "is_configured": true, 00:19:59.568 "data_offset": 2048, 00:19:59.568 "data_size": 63488 00:19:59.568 }, 00:19:59.568 { 00:19:59.568 "name": "BaseBdev3", 00:19:59.568 "uuid": "3ffdfa36-3ebd-5efb-b0e1-4fb69813832d", 00:19:59.568 "is_configured": true, 00:19:59.568 "data_offset": 2048, 00:19:59.568 "data_size": 63488 00:19:59.568 }, 00:19:59.568 { 00:19:59.568 "name": "BaseBdev4", 00:19:59.568 "uuid": "87720334-1434-59a4-8268-1886adc0a771", 00:19:59.568 "is_configured": true, 00:19:59.568 "data_offset": 2048, 00:19:59.568 "data_size": 63488 00:19:59.568 } 00:19:59.568 ] 00:19:59.568 }' 00:19:59.568 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:59.568 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:59.568 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:59.827 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:59.827 14:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:00.762 14:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:00.762 14:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:00.762 14:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:00.762 14:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:00.762 14:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:20:00.762 14:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:00.762 14:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.762 14:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.762 14:45:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.762 14:45:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:00.762 14:45:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.762 14:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:00.762 "name": "raid_bdev1", 00:20:00.762 "uuid": "d975524f-37e6-4da3-9beb-2f0978301a8e", 00:20:00.762 "strip_size_kb": 64, 00:20:00.762 "state": "online", 00:20:00.762 "raid_level": "raid5f", 00:20:00.762 "superblock": true, 00:20:00.762 "num_base_bdevs": 4, 00:20:00.762 "num_base_bdevs_discovered": 4, 00:20:00.762 "num_base_bdevs_operational": 4, 00:20:00.762 "process": { 00:20:00.762 "type": "rebuild", 00:20:00.762 "target": "spare", 00:20:00.762 "progress": { 00:20:00.762 "blocks": 42240, 00:20:00.762 "percent": 22 00:20:00.762 } 00:20:00.762 }, 00:20:00.762 "base_bdevs_list": [ 00:20:00.762 { 00:20:00.762 "name": "spare", 00:20:00.762 "uuid": "43f36880-7860-53c7-bb01-6064e34d4145", 00:20:00.762 "is_configured": true, 00:20:00.762 "data_offset": 2048, 00:20:00.762 "data_size": 63488 00:20:00.762 }, 00:20:00.762 { 00:20:00.762 "name": "BaseBdev2", 00:20:00.762 "uuid": "c75fb840-5b57-59d0-b48f-c6b83a18b82c", 00:20:00.762 "is_configured": true, 00:20:00.762 "data_offset": 2048, 00:20:00.762 "data_size": 63488 00:20:00.762 }, 00:20:00.762 { 00:20:00.762 "name": "BaseBdev3", 00:20:00.762 "uuid": "3ffdfa36-3ebd-5efb-b0e1-4fb69813832d", 00:20:00.762 "is_configured": true, 00:20:00.762 
"data_offset": 2048, 00:20:00.762 "data_size": 63488 00:20:00.762 }, 00:20:00.762 { 00:20:00.762 "name": "BaseBdev4", 00:20:00.762 "uuid": "87720334-1434-59a4-8268-1886adc0a771", 00:20:00.762 "is_configured": true, 00:20:00.762 "data_offset": 2048, 00:20:00.762 "data_size": 63488 00:20:00.762 } 00:20:00.762 ] 00:20:00.762 }' 00:20:00.762 14:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:00.762 14:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:00.762 14:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:00.762 14:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:00.762 14:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:02.142 14:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:02.142 14:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:02.142 14:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:02.142 14:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:02.142 14:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:02.142 14:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:02.142 14:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.142 14:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.142 14:46:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.142 14:46:00 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:02.142 14:46:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.142 14:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:02.142 "name": "raid_bdev1", 00:20:02.142 "uuid": "d975524f-37e6-4da3-9beb-2f0978301a8e", 00:20:02.142 "strip_size_kb": 64, 00:20:02.142 "state": "online", 00:20:02.142 "raid_level": "raid5f", 00:20:02.142 "superblock": true, 00:20:02.142 "num_base_bdevs": 4, 00:20:02.142 "num_base_bdevs_discovered": 4, 00:20:02.142 "num_base_bdevs_operational": 4, 00:20:02.142 "process": { 00:20:02.142 "type": "rebuild", 00:20:02.142 "target": "spare", 00:20:02.142 "progress": { 00:20:02.142 "blocks": 65280, 00:20:02.142 "percent": 34 00:20:02.142 } 00:20:02.142 }, 00:20:02.142 "base_bdevs_list": [ 00:20:02.142 { 00:20:02.142 "name": "spare", 00:20:02.142 "uuid": "43f36880-7860-53c7-bb01-6064e34d4145", 00:20:02.142 "is_configured": true, 00:20:02.142 "data_offset": 2048, 00:20:02.142 "data_size": 63488 00:20:02.142 }, 00:20:02.142 { 00:20:02.142 "name": "BaseBdev2", 00:20:02.142 "uuid": "c75fb840-5b57-59d0-b48f-c6b83a18b82c", 00:20:02.142 "is_configured": true, 00:20:02.142 "data_offset": 2048, 00:20:02.142 "data_size": 63488 00:20:02.142 }, 00:20:02.142 { 00:20:02.142 "name": "BaseBdev3", 00:20:02.142 "uuid": "3ffdfa36-3ebd-5efb-b0e1-4fb69813832d", 00:20:02.142 "is_configured": true, 00:20:02.142 "data_offset": 2048, 00:20:02.142 "data_size": 63488 00:20:02.142 }, 00:20:02.142 { 00:20:02.142 "name": "BaseBdev4", 00:20:02.142 "uuid": "87720334-1434-59a4-8268-1886adc0a771", 00:20:02.142 "is_configured": true, 00:20:02.142 "data_offset": 2048, 00:20:02.142 "data_size": 63488 00:20:02.142 } 00:20:02.142 ] 00:20:02.142 }' 00:20:02.142 14:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:02.142 14:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:20:02.142 14:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:02.142 14:46:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:02.142 14:46:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:03.084 14:46:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:03.084 14:46:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:03.084 14:46:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:03.084 14:46:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:03.084 14:46:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:03.084 14:46:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:03.084 14:46:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.084 14:46:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.084 14:46:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:03.084 14:46:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.084 14:46:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.084 14:46:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:03.084 "name": "raid_bdev1", 00:20:03.084 "uuid": "d975524f-37e6-4da3-9beb-2f0978301a8e", 00:20:03.084 "strip_size_kb": 64, 00:20:03.084 "state": "online", 00:20:03.084 "raid_level": "raid5f", 00:20:03.084 "superblock": true, 00:20:03.084 "num_base_bdevs": 4, 00:20:03.084 "num_base_bdevs_discovered": 4, 
00:20:03.084 "num_base_bdevs_operational": 4, 00:20:03.084 "process": { 00:20:03.084 "type": "rebuild", 00:20:03.084 "target": "spare", 00:20:03.084 "progress": { 00:20:03.084 "blocks": 86400, 00:20:03.084 "percent": 45 00:20:03.084 } 00:20:03.084 }, 00:20:03.084 "base_bdevs_list": [ 00:20:03.084 { 00:20:03.084 "name": "spare", 00:20:03.084 "uuid": "43f36880-7860-53c7-bb01-6064e34d4145", 00:20:03.084 "is_configured": true, 00:20:03.084 "data_offset": 2048, 00:20:03.084 "data_size": 63488 00:20:03.084 }, 00:20:03.084 { 00:20:03.084 "name": "BaseBdev2", 00:20:03.084 "uuid": "c75fb840-5b57-59d0-b48f-c6b83a18b82c", 00:20:03.084 "is_configured": true, 00:20:03.084 "data_offset": 2048, 00:20:03.084 "data_size": 63488 00:20:03.084 }, 00:20:03.084 { 00:20:03.084 "name": "BaseBdev3", 00:20:03.084 "uuid": "3ffdfa36-3ebd-5efb-b0e1-4fb69813832d", 00:20:03.084 "is_configured": true, 00:20:03.084 "data_offset": 2048, 00:20:03.084 "data_size": 63488 00:20:03.084 }, 00:20:03.084 { 00:20:03.084 "name": "BaseBdev4", 00:20:03.084 "uuid": "87720334-1434-59a4-8268-1886adc0a771", 00:20:03.084 "is_configured": true, 00:20:03.084 "data_offset": 2048, 00:20:03.084 "data_size": 63488 00:20:03.084 } 00:20:03.084 ] 00:20:03.084 }' 00:20:03.084 14:46:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:03.084 14:46:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:03.084 14:46:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:03.084 14:46:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:03.084 14:46:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:04.459 14:46:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:04.459 14:46:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:20:04.459 14:46:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:04.459 14:46:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:04.459 14:46:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:04.459 14:46:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:04.459 14:46:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.459 14:46:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.459 14:46:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.459 14:46:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.459 14:46:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.459 14:46:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:04.459 "name": "raid_bdev1", 00:20:04.459 "uuid": "d975524f-37e6-4da3-9beb-2f0978301a8e", 00:20:04.459 "strip_size_kb": 64, 00:20:04.459 "state": "online", 00:20:04.459 "raid_level": "raid5f", 00:20:04.459 "superblock": true, 00:20:04.459 "num_base_bdevs": 4, 00:20:04.459 "num_base_bdevs_discovered": 4, 00:20:04.459 "num_base_bdevs_operational": 4, 00:20:04.459 "process": { 00:20:04.459 "type": "rebuild", 00:20:04.459 "target": "spare", 00:20:04.459 "progress": { 00:20:04.459 "blocks": 109440, 00:20:04.459 "percent": 57 00:20:04.459 } 00:20:04.459 }, 00:20:04.459 "base_bdevs_list": [ 00:20:04.459 { 00:20:04.459 "name": "spare", 00:20:04.459 "uuid": "43f36880-7860-53c7-bb01-6064e34d4145", 00:20:04.459 "is_configured": true, 00:20:04.459 "data_offset": 2048, 00:20:04.459 "data_size": 63488 00:20:04.459 }, 00:20:04.459 { 00:20:04.459 "name": "BaseBdev2", 
00:20:04.459 "uuid": "c75fb840-5b57-59d0-b48f-c6b83a18b82c", 00:20:04.459 "is_configured": true, 00:20:04.459 "data_offset": 2048, 00:20:04.459 "data_size": 63488 00:20:04.459 }, 00:20:04.459 { 00:20:04.459 "name": "BaseBdev3", 00:20:04.459 "uuid": "3ffdfa36-3ebd-5efb-b0e1-4fb69813832d", 00:20:04.459 "is_configured": true, 00:20:04.459 "data_offset": 2048, 00:20:04.459 "data_size": 63488 00:20:04.459 }, 00:20:04.459 { 00:20:04.459 "name": "BaseBdev4", 00:20:04.459 "uuid": "87720334-1434-59a4-8268-1886adc0a771", 00:20:04.459 "is_configured": true, 00:20:04.459 "data_offset": 2048, 00:20:04.459 "data_size": 63488 00:20:04.459 } 00:20:04.459 ] 00:20:04.459 }' 00:20:04.459 14:46:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:04.459 14:46:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:04.459 14:46:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:04.459 14:46:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:04.459 14:46:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:05.394 14:46:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:05.394 14:46:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:05.394 14:46:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:05.394 14:46:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:05.394 14:46:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:05.394 14:46:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:05.394 14:46:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:05.394 14:46:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.394 14:46:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.394 14:46:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:05.394 14:46:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.394 14:46:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:05.394 "name": "raid_bdev1", 00:20:05.394 "uuid": "d975524f-37e6-4da3-9beb-2f0978301a8e", 00:20:05.394 "strip_size_kb": 64, 00:20:05.394 "state": "online", 00:20:05.394 "raid_level": "raid5f", 00:20:05.394 "superblock": true, 00:20:05.394 "num_base_bdevs": 4, 00:20:05.394 "num_base_bdevs_discovered": 4, 00:20:05.394 "num_base_bdevs_operational": 4, 00:20:05.394 "process": { 00:20:05.394 "type": "rebuild", 00:20:05.394 "target": "spare", 00:20:05.394 "progress": { 00:20:05.394 "blocks": 130560, 00:20:05.394 "percent": 68 00:20:05.394 } 00:20:05.394 }, 00:20:05.394 "base_bdevs_list": [ 00:20:05.394 { 00:20:05.394 "name": "spare", 00:20:05.394 "uuid": "43f36880-7860-53c7-bb01-6064e34d4145", 00:20:05.394 "is_configured": true, 00:20:05.394 "data_offset": 2048, 00:20:05.394 "data_size": 63488 00:20:05.394 }, 00:20:05.394 { 00:20:05.394 "name": "BaseBdev2", 00:20:05.394 "uuid": "c75fb840-5b57-59d0-b48f-c6b83a18b82c", 00:20:05.394 "is_configured": true, 00:20:05.394 "data_offset": 2048, 00:20:05.394 "data_size": 63488 00:20:05.394 }, 00:20:05.394 { 00:20:05.394 "name": "BaseBdev3", 00:20:05.394 "uuid": "3ffdfa36-3ebd-5efb-b0e1-4fb69813832d", 00:20:05.394 "is_configured": true, 00:20:05.394 "data_offset": 2048, 00:20:05.394 "data_size": 63488 00:20:05.394 }, 00:20:05.394 { 00:20:05.394 "name": "BaseBdev4", 00:20:05.394 "uuid": "87720334-1434-59a4-8268-1886adc0a771", 00:20:05.394 "is_configured": true, 
00:20:05.394 "data_offset": 2048, 00:20:05.394 "data_size": 63488 00:20:05.394 } 00:20:05.394 ] 00:20:05.394 }' 00:20:05.394 14:46:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:05.394 14:46:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:05.394 14:46:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:05.394 14:46:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:05.394 14:46:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:06.769 14:46:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:06.769 14:46:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:06.769 14:46:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:06.769 14:46:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:06.769 14:46:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:06.769 14:46:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:06.769 14:46:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.769 14:46:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.769 14:46:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.769 14:46:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.769 14:46:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.769 14:46:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:20:06.769 "name": "raid_bdev1", 00:20:06.769 "uuid": "d975524f-37e6-4da3-9beb-2f0978301a8e", 00:20:06.769 "strip_size_kb": 64, 00:20:06.769 "state": "online", 00:20:06.769 "raid_level": "raid5f", 00:20:06.769 "superblock": true, 00:20:06.769 "num_base_bdevs": 4, 00:20:06.769 "num_base_bdevs_discovered": 4, 00:20:06.769 "num_base_bdevs_operational": 4, 00:20:06.769 "process": { 00:20:06.769 "type": "rebuild", 00:20:06.769 "target": "spare", 00:20:06.769 "progress": { 00:20:06.769 "blocks": 153600, 00:20:06.769 "percent": 80 00:20:06.769 } 00:20:06.769 }, 00:20:06.769 "base_bdevs_list": [ 00:20:06.769 { 00:20:06.769 "name": "spare", 00:20:06.769 "uuid": "43f36880-7860-53c7-bb01-6064e34d4145", 00:20:06.769 "is_configured": true, 00:20:06.769 "data_offset": 2048, 00:20:06.769 "data_size": 63488 00:20:06.769 }, 00:20:06.769 { 00:20:06.769 "name": "BaseBdev2", 00:20:06.769 "uuid": "c75fb840-5b57-59d0-b48f-c6b83a18b82c", 00:20:06.769 "is_configured": true, 00:20:06.769 "data_offset": 2048, 00:20:06.769 "data_size": 63488 00:20:06.769 }, 00:20:06.769 { 00:20:06.769 "name": "BaseBdev3", 00:20:06.769 "uuid": "3ffdfa36-3ebd-5efb-b0e1-4fb69813832d", 00:20:06.769 "is_configured": true, 00:20:06.769 "data_offset": 2048, 00:20:06.769 "data_size": 63488 00:20:06.769 }, 00:20:06.769 { 00:20:06.769 "name": "BaseBdev4", 00:20:06.769 "uuid": "87720334-1434-59a4-8268-1886adc0a771", 00:20:06.769 "is_configured": true, 00:20:06.769 "data_offset": 2048, 00:20:06.769 "data_size": 63488 00:20:06.769 } 00:20:06.769 ] 00:20:06.769 }' 00:20:06.769 14:46:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:06.769 14:46:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:06.769 14:46:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:06.769 14:46:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare 
== \s\p\a\r\e ]] 00:20:06.769 14:46:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:07.755 14:46:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:07.755 14:46:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:07.755 14:46:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:07.755 14:46:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:07.755 14:46:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:07.755 14:46:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:07.755 14:46:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.755 14:46:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.755 14:46:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.755 14:46:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.755 14:46:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.755 14:46:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:07.755 "name": "raid_bdev1", 00:20:07.755 "uuid": "d975524f-37e6-4da3-9beb-2f0978301a8e", 00:20:07.755 "strip_size_kb": 64, 00:20:07.755 "state": "online", 00:20:07.755 "raid_level": "raid5f", 00:20:07.755 "superblock": true, 00:20:07.755 "num_base_bdevs": 4, 00:20:07.755 "num_base_bdevs_discovered": 4, 00:20:07.755 "num_base_bdevs_operational": 4, 00:20:07.755 "process": { 00:20:07.755 "type": "rebuild", 00:20:07.755 "target": "spare", 00:20:07.755 "progress": { 00:20:07.755 "blocks": 174720, 00:20:07.755 "percent": 91 00:20:07.755 
} 00:20:07.755 }, 00:20:07.755 "base_bdevs_list": [ 00:20:07.755 { 00:20:07.755 "name": "spare", 00:20:07.755 "uuid": "43f36880-7860-53c7-bb01-6064e34d4145", 00:20:07.755 "is_configured": true, 00:20:07.755 "data_offset": 2048, 00:20:07.755 "data_size": 63488 00:20:07.755 }, 00:20:07.755 { 00:20:07.755 "name": "BaseBdev2", 00:20:07.755 "uuid": "c75fb840-5b57-59d0-b48f-c6b83a18b82c", 00:20:07.755 "is_configured": true, 00:20:07.755 "data_offset": 2048, 00:20:07.755 "data_size": 63488 00:20:07.755 }, 00:20:07.755 { 00:20:07.755 "name": "BaseBdev3", 00:20:07.755 "uuid": "3ffdfa36-3ebd-5efb-b0e1-4fb69813832d", 00:20:07.755 "is_configured": true, 00:20:07.755 "data_offset": 2048, 00:20:07.755 "data_size": 63488 00:20:07.755 }, 00:20:07.755 { 00:20:07.755 "name": "BaseBdev4", 00:20:07.755 "uuid": "87720334-1434-59a4-8268-1886adc0a771", 00:20:07.755 "is_configured": true, 00:20:07.755 "data_offset": 2048, 00:20:07.755 "data_size": 63488 00:20:07.755 } 00:20:07.755 ] 00:20:07.755 }' 00:20:07.755 14:46:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:07.755 14:46:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:07.755 14:46:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:07.755 14:46:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:07.755 14:46:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:08.690 [2024-11-04 14:46:07.500931] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:08.690 [2024-11-04 14:46:07.501046] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:08.690 [2024-11-04 14:46:07.501244] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:08.949 14:46:07 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:08.949 14:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:08.949 14:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:08.949 14:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:08.949 14:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:08.949 14:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:08.949 14:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.949 14:46:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.950 14:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.950 14:46:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:08.950 14:46:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.950 14:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:08.950 "name": "raid_bdev1", 00:20:08.950 "uuid": "d975524f-37e6-4da3-9beb-2f0978301a8e", 00:20:08.950 "strip_size_kb": 64, 00:20:08.950 "state": "online", 00:20:08.950 "raid_level": "raid5f", 00:20:08.950 "superblock": true, 00:20:08.950 "num_base_bdevs": 4, 00:20:08.950 "num_base_bdevs_discovered": 4, 00:20:08.950 "num_base_bdevs_operational": 4, 00:20:08.950 "base_bdevs_list": [ 00:20:08.950 { 00:20:08.950 "name": "spare", 00:20:08.950 "uuid": "43f36880-7860-53c7-bb01-6064e34d4145", 00:20:08.950 "is_configured": true, 00:20:08.950 "data_offset": 2048, 00:20:08.950 "data_size": 63488 00:20:08.950 }, 00:20:08.950 { 00:20:08.950 "name": "BaseBdev2", 00:20:08.950 "uuid": 
"c75fb840-5b57-59d0-b48f-c6b83a18b82c", 00:20:08.950 "is_configured": true, 00:20:08.950 "data_offset": 2048, 00:20:08.950 "data_size": 63488 00:20:08.950 }, 00:20:08.950 { 00:20:08.950 "name": "BaseBdev3", 00:20:08.950 "uuid": "3ffdfa36-3ebd-5efb-b0e1-4fb69813832d", 00:20:08.950 "is_configured": true, 00:20:08.950 "data_offset": 2048, 00:20:08.950 "data_size": 63488 00:20:08.950 }, 00:20:08.950 { 00:20:08.950 "name": "BaseBdev4", 00:20:08.950 "uuid": "87720334-1434-59a4-8268-1886adc0a771", 00:20:08.950 "is_configured": true, 00:20:08.950 "data_offset": 2048, 00:20:08.950 "data_size": 63488 00:20:08.950 } 00:20:08.950 ] 00:20:08.950 }' 00:20:08.950 14:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:08.950 14:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:08.950 14:46:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:08.950 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:08.950 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:20:08.950 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:08.950 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:08.950 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:08.950 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:08.950 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:08.950 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.950 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:20:08.950 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.950 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:08.950 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.950 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:08.950 "name": "raid_bdev1", 00:20:08.950 "uuid": "d975524f-37e6-4da3-9beb-2f0978301a8e", 00:20:08.950 "strip_size_kb": 64, 00:20:08.950 "state": "online", 00:20:08.950 "raid_level": "raid5f", 00:20:08.950 "superblock": true, 00:20:08.950 "num_base_bdevs": 4, 00:20:08.950 "num_base_bdevs_discovered": 4, 00:20:08.950 "num_base_bdevs_operational": 4, 00:20:08.950 "base_bdevs_list": [ 00:20:08.950 { 00:20:08.950 "name": "spare", 00:20:08.950 "uuid": "43f36880-7860-53c7-bb01-6064e34d4145", 00:20:08.950 "is_configured": true, 00:20:08.950 "data_offset": 2048, 00:20:08.950 "data_size": 63488 00:20:08.950 }, 00:20:08.950 { 00:20:08.950 "name": "BaseBdev2", 00:20:08.950 "uuid": "c75fb840-5b57-59d0-b48f-c6b83a18b82c", 00:20:08.950 "is_configured": true, 00:20:08.950 "data_offset": 2048, 00:20:08.950 "data_size": 63488 00:20:08.950 }, 00:20:08.950 { 00:20:08.950 "name": "BaseBdev3", 00:20:08.950 "uuid": "3ffdfa36-3ebd-5efb-b0e1-4fb69813832d", 00:20:08.950 "is_configured": true, 00:20:08.950 "data_offset": 2048, 00:20:08.950 "data_size": 63488 00:20:08.950 }, 00:20:08.950 { 00:20:08.950 "name": "BaseBdev4", 00:20:08.950 "uuid": "87720334-1434-59a4-8268-1886adc0a771", 00:20:08.950 "is_configured": true, 00:20:08.950 "data_offset": 2048, 00:20:08.950 "data_size": 63488 00:20:08.950 } 00:20:08.950 ] 00:20:08.950 }' 00:20:08.950 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:09.208 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:09.208 
14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:09.208 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:09.208 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:09.208 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:09.208 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:09.208 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:09.208 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:09.208 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:09.208 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:09.208 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:09.208 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:09.208 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:09.208 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.208 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.208 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.208 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:09.208 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.208 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:20:09.208 "name": "raid_bdev1", 00:20:09.208 "uuid": "d975524f-37e6-4da3-9beb-2f0978301a8e", 00:20:09.208 "strip_size_kb": 64, 00:20:09.208 "state": "online", 00:20:09.208 "raid_level": "raid5f", 00:20:09.208 "superblock": true, 00:20:09.208 "num_base_bdevs": 4, 00:20:09.208 "num_base_bdevs_discovered": 4, 00:20:09.208 "num_base_bdevs_operational": 4, 00:20:09.208 "base_bdevs_list": [ 00:20:09.208 { 00:20:09.208 "name": "spare", 00:20:09.208 "uuid": "43f36880-7860-53c7-bb01-6064e34d4145", 00:20:09.208 "is_configured": true, 00:20:09.208 "data_offset": 2048, 00:20:09.208 "data_size": 63488 00:20:09.208 }, 00:20:09.208 { 00:20:09.208 "name": "BaseBdev2", 00:20:09.208 "uuid": "c75fb840-5b57-59d0-b48f-c6b83a18b82c", 00:20:09.208 "is_configured": true, 00:20:09.208 "data_offset": 2048, 00:20:09.208 "data_size": 63488 00:20:09.208 }, 00:20:09.208 { 00:20:09.208 "name": "BaseBdev3", 00:20:09.208 "uuid": "3ffdfa36-3ebd-5efb-b0e1-4fb69813832d", 00:20:09.208 "is_configured": true, 00:20:09.208 "data_offset": 2048, 00:20:09.208 "data_size": 63488 00:20:09.208 }, 00:20:09.208 { 00:20:09.208 "name": "BaseBdev4", 00:20:09.208 "uuid": "87720334-1434-59a4-8268-1886adc0a771", 00:20:09.208 "is_configured": true, 00:20:09.208 "data_offset": 2048, 00:20:09.208 "data_size": 63488 00:20:09.208 } 00:20:09.208 ] 00:20:09.208 }' 00:20:09.208 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:09.208 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:09.775 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:09.775 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.775 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:09.775 [2024-11-04 14:46:08.680154] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 
00:20:09.775 [2024-11-04 14:46:08.680331] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:09.775 [2024-11-04 14:46:08.680447] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:09.775 [2024-11-04 14:46:08.680570] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:09.775 [2024-11-04 14:46:08.680602] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:09.775 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.775 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.775 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.775 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:09.775 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:20:09.775 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.775 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:09.775 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:09.775 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:09.775 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:09.775 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:09.775 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:09.775 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 
00:20:09.775 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:09.775 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:09.775 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:20:09.775 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:09.775 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:09.775 14:46:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:10.033 /dev/nbd0 00:20:10.033 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:10.033 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:10.033 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:10.033 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:20:10.033 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:10.033 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:10.033 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:10.033 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:20:10.033 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:10.033 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:10.033 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:10.033 1+0 records in 
00:20:10.033 1+0 records out 00:20:10.033 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473802 s, 8.6 MB/s 00:20:10.033 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:10.033 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:20:10.033 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:10.033 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:10.033 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:20:10.033 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:10.033 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:10.033 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:10.291 /dev/nbd1 00:20:10.291 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:10.291 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:10.291 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:20:10.291 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:20:10.291 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:10.291 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:10.291 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:20:10.291 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:20:10.291 14:46:09 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:10.291 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:10.291 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:10.291 1+0 records in 00:20:10.291 1+0 records out 00:20:10.291 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400681 s, 10.2 MB/s 00:20:10.291 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:10.291 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:20:10.291 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:10.291 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:10.291 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:20:10.292 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:10.292 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:10.292 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:10.550 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:10.550 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:10.550 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:10.550 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:10.550 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 
-- # local i 00:20:10.550 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:10.550 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:10.807 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:10.807 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:10.807 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:10.807 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:10.807 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:10.807 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:10.807 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:10.807 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:10.807 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:10.807 14:46:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:11.065 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:11.065 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:11.065 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:11.065 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:11.065 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:11.065 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd1 /proc/partitions 00:20:11.065 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:20:11.065 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:20:11.065 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:11.065 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:11.065 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.065 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.065 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.065 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:11.065 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.065 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.065 [2024-11-04 14:46:10.116949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:11.065 [2024-11-04 14:46:10.117008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:11.065 [2024-11-04 14:46:10.117042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:20:11.065 [2024-11-04 14:46:10.117057] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:11.065 [2024-11-04 14:46:10.119906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:11.065 [2024-11-04 14:46:10.119962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:11.065 [2024-11-04 14:46:10.120080] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:11.065 [2024-11-04 14:46:10.120146] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:11.065 [2024-11-04 14:46:10.120315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:11.065 [2024-11-04 14:46:10.120456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:11.065 [2024-11-04 14:46:10.120580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:11.065 spare 00:20:11.065 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.065 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:11.065 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.065 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.323 [2024-11-04 14:46:10.220713] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:11.323 [2024-11-04 14:46:10.220768] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:11.323 [2024-11-04 14:46:10.221152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:20:11.323 [2024-11-04 14:46:10.227476] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:11.323 [2024-11-04 14:46:10.227507] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:11.323 [2024-11-04 14:46:10.227744] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:11.323 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.323 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:11.323 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:11.323 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:11.323 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:11.323 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:11.323 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:11.323 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.323 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.323 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.323 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.323 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.323 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.323 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.323 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.323 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.323 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.323 "name": "raid_bdev1", 00:20:11.323 "uuid": "d975524f-37e6-4da3-9beb-2f0978301a8e", 00:20:11.323 "strip_size_kb": 64, 00:20:11.323 "state": "online", 00:20:11.323 "raid_level": "raid5f", 00:20:11.323 "superblock": true, 00:20:11.323 "num_base_bdevs": 4, 00:20:11.323 "num_base_bdevs_discovered": 4, 00:20:11.323 "num_base_bdevs_operational": 4, 00:20:11.323 "base_bdevs_list": [ 00:20:11.323 { 
00:20:11.323 "name": "spare", 00:20:11.323 "uuid": "43f36880-7860-53c7-bb01-6064e34d4145", 00:20:11.323 "is_configured": true, 00:20:11.323 "data_offset": 2048, 00:20:11.323 "data_size": 63488 00:20:11.323 }, 00:20:11.323 { 00:20:11.323 "name": "BaseBdev2", 00:20:11.323 "uuid": "c75fb840-5b57-59d0-b48f-c6b83a18b82c", 00:20:11.323 "is_configured": true, 00:20:11.323 "data_offset": 2048, 00:20:11.323 "data_size": 63488 00:20:11.323 }, 00:20:11.323 { 00:20:11.323 "name": "BaseBdev3", 00:20:11.323 "uuid": "3ffdfa36-3ebd-5efb-b0e1-4fb69813832d", 00:20:11.323 "is_configured": true, 00:20:11.323 "data_offset": 2048, 00:20:11.323 "data_size": 63488 00:20:11.323 }, 00:20:11.323 { 00:20:11.323 "name": "BaseBdev4", 00:20:11.323 "uuid": "87720334-1434-59a4-8268-1886adc0a771", 00:20:11.323 "is_configured": true, 00:20:11.323 "data_offset": 2048, 00:20:11.323 "data_size": 63488 00:20:11.323 } 00:20:11.323 ] 00:20:11.323 }' 00:20:11.323 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.323 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:11.890 "name": "raid_bdev1", 00:20:11.890 "uuid": "d975524f-37e6-4da3-9beb-2f0978301a8e", 00:20:11.890 "strip_size_kb": 64, 00:20:11.890 "state": "online", 00:20:11.890 "raid_level": "raid5f", 00:20:11.890 "superblock": true, 00:20:11.890 "num_base_bdevs": 4, 00:20:11.890 "num_base_bdevs_discovered": 4, 00:20:11.890 "num_base_bdevs_operational": 4, 00:20:11.890 "base_bdevs_list": [ 00:20:11.890 { 00:20:11.890 "name": "spare", 00:20:11.890 "uuid": "43f36880-7860-53c7-bb01-6064e34d4145", 00:20:11.890 "is_configured": true, 00:20:11.890 "data_offset": 2048, 00:20:11.890 "data_size": 63488 00:20:11.890 }, 00:20:11.890 { 00:20:11.890 "name": "BaseBdev2", 00:20:11.890 "uuid": "c75fb840-5b57-59d0-b48f-c6b83a18b82c", 00:20:11.890 "is_configured": true, 00:20:11.890 "data_offset": 2048, 00:20:11.890 "data_size": 63488 00:20:11.890 }, 00:20:11.890 { 00:20:11.890 "name": "BaseBdev3", 00:20:11.890 "uuid": "3ffdfa36-3ebd-5efb-b0e1-4fb69813832d", 00:20:11.890 "is_configured": true, 00:20:11.890 "data_offset": 2048, 00:20:11.890 "data_size": 63488 00:20:11.890 }, 00:20:11.890 { 00:20:11.890 "name": "BaseBdev4", 00:20:11.890 "uuid": "87720334-1434-59a4-8268-1886adc0a771", 00:20:11.890 "is_configured": true, 00:20:11.890 "data_offset": 2048, 00:20:11.890 "data_size": 63488 00:20:11.890 } 00:20:11.890 ] 00:20:11.890 }' 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.890 [2024-11-04 14:46:10.903258] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.890 "name": "raid_bdev1", 00:20:11.890 "uuid": "d975524f-37e6-4da3-9beb-2f0978301a8e", 00:20:11.890 "strip_size_kb": 64, 00:20:11.890 "state": "online", 00:20:11.890 "raid_level": "raid5f", 00:20:11.890 "superblock": true, 00:20:11.890 "num_base_bdevs": 4, 00:20:11.890 "num_base_bdevs_discovered": 3, 00:20:11.890 "num_base_bdevs_operational": 3, 00:20:11.890 "base_bdevs_list": [ 00:20:11.890 { 00:20:11.890 "name": null, 00:20:11.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.890 "is_configured": false, 00:20:11.890 "data_offset": 0, 00:20:11.890 "data_size": 63488 00:20:11.890 }, 00:20:11.890 { 00:20:11.890 "name": "BaseBdev2", 00:20:11.890 "uuid": "c75fb840-5b57-59d0-b48f-c6b83a18b82c", 00:20:11.890 "is_configured": true, 00:20:11.890 "data_offset": 2048, 00:20:11.890 "data_size": 63488 00:20:11.890 }, 00:20:11.890 
{ 00:20:11.890 "name": "BaseBdev3", 00:20:11.890 "uuid": "3ffdfa36-3ebd-5efb-b0e1-4fb69813832d", 00:20:11.890 "is_configured": true, 00:20:11.890 "data_offset": 2048, 00:20:11.890 "data_size": 63488 00:20:11.890 }, 00:20:11.890 { 00:20:11.890 "name": "BaseBdev4", 00:20:11.890 "uuid": "87720334-1434-59a4-8268-1886adc0a771", 00:20:11.890 "is_configured": true, 00:20:11.890 "data_offset": 2048, 00:20:11.890 "data_size": 63488 00:20:11.890 } 00:20:11.890 ] 00:20:11.890 }' 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.890 14:46:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.458 14:46:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:12.458 14:46:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.458 14:46:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.458 [2024-11-04 14:46:11.415413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:12.458 [2024-11-04 14:46:11.415639] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:12.458 [2024-11-04 14:46:11.415681] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:12.458 [2024-11-04 14:46:11.415724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:12.458 [2024-11-04 14:46:11.429006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:20:12.458 14:46:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.458 14:46:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:12.458 [2024-11-04 14:46:11.437826] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:13.394 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:13.394 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:13.394 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:13.394 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:13.394 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:13.394 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.394 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.394 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.394 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.394 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.394 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:13.394 "name": "raid_bdev1", 00:20:13.394 "uuid": "d975524f-37e6-4da3-9beb-2f0978301a8e", 00:20:13.394 "strip_size_kb": 64, 00:20:13.394 "state": "online", 00:20:13.394 
"raid_level": "raid5f", 00:20:13.394 "superblock": true, 00:20:13.394 "num_base_bdevs": 4, 00:20:13.394 "num_base_bdevs_discovered": 4, 00:20:13.394 "num_base_bdevs_operational": 4, 00:20:13.394 "process": { 00:20:13.394 "type": "rebuild", 00:20:13.394 "target": "spare", 00:20:13.394 "progress": { 00:20:13.394 "blocks": 17280, 00:20:13.394 "percent": 9 00:20:13.394 } 00:20:13.394 }, 00:20:13.394 "base_bdevs_list": [ 00:20:13.394 { 00:20:13.394 "name": "spare", 00:20:13.394 "uuid": "43f36880-7860-53c7-bb01-6064e34d4145", 00:20:13.394 "is_configured": true, 00:20:13.394 "data_offset": 2048, 00:20:13.394 "data_size": 63488 00:20:13.394 }, 00:20:13.394 { 00:20:13.394 "name": "BaseBdev2", 00:20:13.394 "uuid": "c75fb840-5b57-59d0-b48f-c6b83a18b82c", 00:20:13.394 "is_configured": true, 00:20:13.394 "data_offset": 2048, 00:20:13.394 "data_size": 63488 00:20:13.394 }, 00:20:13.394 { 00:20:13.394 "name": "BaseBdev3", 00:20:13.394 "uuid": "3ffdfa36-3ebd-5efb-b0e1-4fb69813832d", 00:20:13.394 "is_configured": true, 00:20:13.394 "data_offset": 2048, 00:20:13.394 "data_size": 63488 00:20:13.394 }, 00:20:13.394 { 00:20:13.394 "name": "BaseBdev4", 00:20:13.394 "uuid": "87720334-1434-59a4-8268-1886adc0a771", 00:20:13.394 "is_configured": true, 00:20:13.394 "data_offset": 2048, 00:20:13.394 "data_size": 63488 00:20:13.394 } 00:20:13.394 ] 00:20:13.394 }' 00:20:13.394 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:13.652 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:13.652 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:13.653 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:13.653 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:13.653 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.653 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.653 [2024-11-04 14:46:12.583589] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:13.653 [2024-11-04 14:46:12.650696] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:13.653 [2024-11-04 14:46:12.650854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:13.653 [2024-11-04 14:46:12.650885] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:13.653 [2024-11-04 14:46:12.650906] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:13.653 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.653 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:13.653 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:13.653 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:13.653 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:13.653 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:13.653 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:13.653 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:13.653 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:13.653 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:13.653 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:20:13.653 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.653 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.653 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.653 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.653 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.653 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:13.653 "name": "raid_bdev1", 00:20:13.653 "uuid": "d975524f-37e6-4da3-9beb-2f0978301a8e", 00:20:13.653 "strip_size_kb": 64, 00:20:13.653 "state": "online", 00:20:13.653 "raid_level": "raid5f", 00:20:13.653 "superblock": true, 00:20:13.653 "num_base_bdevs": 4, 00:20:13.653 "num_base_bdevs_discovered": 3, 00:20:13.653 "num_base_bdevs_operational": 3, 00:20:13.653 "base_bdevs_list": [ 00:20:13.653 { 00:20:13.653 "name": null, 00:20:13.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.653 "is_configured": false, 00:20:13.653 "data_offset": 0, 00:20:13.653 "data_size": 63488 00:20:13.653 }, 00:20:13.653 { 00:20:13.653 "name": "BaseBdev2", 00:20:13.653 "uuid": "c75fb840-5b57-59d0-b48f-c6b83a18b82c", 00:20:13.653 "is_configured": true, 00:20:13.653 "data_offset": 2048, 00:20:13.653 "data_size": 63488 00:20:13.653 }, 00:20:13.653 { 00:20:13.653 "name": "BaseBdev3", 00:20:13.653 "uuid": "3ffdfa36-3ebd-5efb-b0e1-4fb69813832d", 00:20:13.653 "is_configured": true, 00:20:13.653 "data_offset": 2048, 00:20:13.653 "data_size": 63488 00:20:13.653 }, 00:20:13.653 { 00:20:13.653 "name": "BaseBdev4", 00:20:13.653 "uuid": "87720334-1434-59a4-8268-1886adc0a771", 00:20:13.653 "is_configured": true, 00:20:13.653 "data_offset": 2048, 00:20:13.653 "data_size": 63488 00:20:13.653 } 00:20:13.653 ] 00:20:13.653 }' 
00:20:13.653 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:13.653 14:46:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.220 14:46:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:14.220 14:46:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.220 14:46:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:14.220 [2024-11-04 14:46:13.178069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:14.220 [2024-11-04 14:46:13.178153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:14.220 [2024-11-04 14:46:13.178193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:20:14.220 [2024-11-04 14:46:13.178227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:14.220 [2024-11-04 14:46:13.178837] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:14.220 [2024-11-04 14:46:13.178878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:14.220 [2024-11-04 14:46:13.179011] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:14.220 [2024-11-04 14:46:13.179047] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:14.220 [2024-11-04 14:46:13.179061] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:14.220 [2024-11-04 14:46:13.179099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:14.220 [2024-11-04 14:46:13.192309] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:20:14.220 spare 00:20:14.220 14:46:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.220 14:46:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:14.220 [2024-11-04 14:46:13.200974] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:15.157 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:15.157 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:15.157 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:15.157 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:15.157 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:15.157 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.157 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.157 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.157 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.157 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.157 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:15.157 "name": "raid_bdev1", 00:20:15.157 "uuid": "d975524f-37e6-4da3-9beb-2f0978301a8e", 00:20:15.157 "strip_size_kb": 64, 00:20:15.157 "state": 
"online", 00:20:15.157 "raid_level": "raid5f", 00:20:15.157 "superblock": true, 00:20:15.157 "num_base_bdevs": 4, 00:20:15.157 "num_base_bdevs_discovered": 4, 00:20:15.157 "num_base_bdevs_operational": 4, 00:20:15.157 "process": { 00:20:15.157 "type": "rebuild", 00:20:15.157 "target": "spare", 00:20:15.157 "progress": { 00:20:15.157 "blocks": 17280, 00:20:15.157 "percent": 9 00:20:15.157 } 00:20:15.157 }, 00:20:15.157 "base_bdevs_list": [ 00:20:15.157 { 00:20:15.157 "name": "spare", 00:20:15.157 "uuid": "43f36880-7860-53c7-bb01-6064e34d4145", 00:20:15.157 "is_configured": true, 00:20:15.157 "data_offset": 2048, 00:20:15.157 "data_size": 63488 00:20:15.157 }, 00:20:15.157 { 00:20:15.157 "name": "BaseBdev2", 00:20:15.157 "uuid": "c75fb840-5b57-59d0-b48f-c6b83a18b82c", 00:20:15.157 "is_configured": true, 00:20:15.157 "data_offset": 2048, 00:20:15.157 "data_size": 63488 00:20:15.157 }, 00:20:15.157 { 00:20:15.157 "name": "BaseBdev3", 00:20:15.157 "uuid": "3ffdfa36-3ebd-5efb-b0e1-4fb69813832d", 00:20:15.157 "is_configured": true, 00:20:15.158 "data_offset": 2048, 00:20:15.158 "data_size": 63488 00:20:15.158 }, 00:20:15.158 { 00:20:15.158 "name": "BaseBdev4", 00:20:15.158 "uuid": "87720334-1434-59a4-8268-1886adc0a771", 00:20:15.158 "is_configured": true, 00:20:15.158 "data_offset": 2048, 00:20:15.158 "data_size": 63488 00:20:15.158 } 00:20:15.158 ] 00:20:15.158 }' 00:20:15.158 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:15.416 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:15.416 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:15.416 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:15.416 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:15.416 14:46:14 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.416 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.416 [2024-11-04 14:46:14.362869] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:15.416 [2024-11-04 14:46:14.413683] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:15.416 [2024-11-04 14:46:14.413788] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:15.416 [2024-11-04 14:46:14.413820] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:15.416 [2024-11-04 14:46:14.413833] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:15.416 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.416 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:15.417 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:15.417 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:15.417 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:15.417 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:15.417 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:15.417 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:15.417 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:15.417 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:15.417 14:46:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:15.417 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.417 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.417 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.417 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.417 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.417 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:15.417 "name": "raid_bdev1", 00:20:15.417 "uuid": "d975524f-37e6-4da3-9beb-2f0978301a8e", 00:20:15.417 "strip_size_kb": 64, 00:20:15.417 "state": "online", 00:20:15.417 "raid_level": "raid5f", 00:20:15.417 "superblock": true, 00:20:15.417 "num_base_bdevs": 4, 00:20:15.417 "num_base_bdevs_discovered": 3, 00:20:15.417 "num_base_bdevs_operational": 3, 00:20:15.417 "base_bdevs_list": [ 00:20:15.417 { 00:20:15.417 "name": null, 00:20:15.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.417 "is_configured": false, 00:20:15.417 "data_offset": 0, 00:20:15.417 "data_size": 63488 00:20:15.417 }, 00:20:15.417 { 00:20:15.417 "name": "BaseBdev2", 00:20:15.417 "uuid": "c75fb840-5b57-59d0-b48f-c6b83a18b82c", 00:20:15.417 "is_configured": true, 00:20:15.417 "data_offset": 2048, 00:20:15.417 "data_size": 63488 00:20:15.417 }, 00:20:15.417 { 00:20:15.417 "name": "BaseBdev3", 00:20:15.417 "uuid": "3ffdfa36-3ebd-5efb-b0e1-4fb69813832d", 00:20:15.417 "is_configured": true, 00:20:15.417 "data_offset": 2048, 00:20:15.417 "data_size": 63488 00:20:15.417 }, 00:20:15.417 { 00:20:15.417 "name": "BaseBdev4", 00:20:15.417 "uuid": "87720334-1434-59a4-8268-1886adc0a771", 00:20:15.417 "is_configured": true, 00:20:15.417 "data_offset": 2048, 00:20:15.417 
"data_size": 63488 00:20:15.417 } 00:20:15.417 ] 00:20:15.417 }' 00:20:15.417 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:15.417 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.983 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:15.983 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:15.983 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:15.983 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:15.983 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:15.983 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.983 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.983 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.983 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.983 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.983 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:15.983 "name": "raid_bdev1", 00:20:15.983 "uuid": "d975524f-37e6-4da3-9beb-2f0978301a8e", 00:20:15.983 "strip_size_kb": 64, 00:20:15.983 "state": "online", 00:20:15.983 "raid_level": "raid5f", 00:20:15.983 "superblock": true, 00:20:15.983 "num_base_bdevs": 4, 00:20:15.983 "num_base_bdevs_discovered": 3, 00:20:15.983 "num_base_bdevs_operational": 3, 00:20:15.983 "base_bdevs_list": [ 00:20:15.983 { 00:20:15.983 "name": null, 00:20:15.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.983 
"is_configured": false, 00:20:15.983 "data_offset": 0, 00:20:15.983 "data_size": 63488 00:20:15.983 }, 00:20:15.983 { 00:20:15.983 "name": "BaseBdev2", 00:20:15.983 "uuid": "c75fb840-5b57-59d0-b48f-c6b83a18b82c", 00:20:15.983 "is_configured": true, 00:20:15.983 "data_offset": 2048, 00:20:15.983 "data_size": 63488 00:20:15.983 }, 00:20:15.983 { 00:20:15.983 "name": "BaseBdev3", 00:20:15.983 "uuid": "3ffdfa36-3ebd-5efb-b0e1-4fb69813832d", 00:20:15.983 "is_configured": true, 00:20:15.983 "data_offset": 2048, 00:20:15.983 "data_size": 63488 00:20:15.983 }, 00:20:15.983 { 00:20:15.983 "name": "BaseBdev4", 00:20:15.983 "uuid": "87720334-1434-59a4-8268-1886adc0a771", 00:20:15.983 "is_configured": true, 00:20:15.983 "data_offset": 2048, 00:20:15.983 "data_size": 63488 00:20:15.983 } 00:20:15.983 ] 00:20:15.983 }' 00:20:15.983 14:46:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:15.983 14:46:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:15.983 14:46:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:15.983 14:46:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:15.983 14:46:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:15.983 14:46:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.983 14:46:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.983 14:46:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.983 14:46:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:15.983 14:46:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.983 14:46:15 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.983 [2024-11-04 14:46:15.096783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:15.983 [2024-11-04 14:46:15.096850] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:15.983 [2024-11-04 14:46:15.096883] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:20:15.983 [2024-11-04 14:46:15.096898] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:15.983 [2024-11-04 14:46:15.097488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:15.983 [2024-11-04 14:46:15.097525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:15.983 [2024-11-04 14:46:15.097631] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:15.983 [2024-11-04 14:46:15.097653] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:15.983 [2024-11-04 14:46:15.097671] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:15.983 [2024-11-04 14:46:15.097685] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:15.983 BaseBdev1 00:20:15.983 14:46:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.983 14:46:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:17.390 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:17.390 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:17.390 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:20:17.391 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:17.391 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:17.391 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:17.391 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.391 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.391 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.391 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.391 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.391 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.391 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.391 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.391 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.391 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.391 "name": "raid_bdev1", 00:20:17.391 "uuid": "d975524f-37e6-4da3-9beb-2f0978301a8e", 00:20:17.391 "strip_size_kb": 64, 00:20:17.391 "state": "online", 00:20:17.391 "raid_level": "raid5f", 00:20:17.391 "superblock": true, 00:20:17.391 "num_base_bdevs": 4, 00:20:17.391 "num_base_bdevs_discovered": 3, 00:20:17.391 "num_base_bdevs_operational": 3, 00:20:17.391 "base_bdevs_list": [ 00:20:17.391 { 00:20:17.391 "name": null, 00:20:17.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.391 "is_configured": false, 00:20:17.391 
"data_offset": 0, 00:20:17.391 "data_size": 63488 00:20:17.391 }, 00:20:17.391 { 00:20:17.391 "name": "BaseBdev2", 00:20:17.391 "uuid": "c75fb840-5b57-59d0-b48f-c6b83a18b82c", 00:20:17.391 "is_configured": true, 00:20:17.391 "data_offset": 2048, 00:20:17.391 "data_size": 63488 00:20:17.391 }, 00:20:17.391 { 00:20:17.391 "name": "BaseBdev3", 00:20:17.391 "uuid": "3ffdfa36-3ebd-5efb-b0e1-4fb69813832d", 00:20:17.391 "is_configured": true, 00:20:17.391 "data_offset": 2048, 00:20:17.391 "data_size": 63488 00:20:17.391 }, 00:20:17.391 { 00:20:17.391 "name": "BaseBdev4", 00:20:17.391 "uuid": "87720334-1434-59a4-8268-1886adc0a771", 00:20:17.391 "is_configured": true, 00:20:17.391 "data_offset": 2048, 00:20:17.391 "data_size": 63488 00:20:17.391 } 00:20:17.391 ] 00:20:17.391 }' 00:20:17.391 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.391 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.649 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:17.649 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:17.649 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:17.649 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:17.649 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:17.649 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.649 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.649 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.649 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:20:17.649 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.649 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:17.649 "name": "raid_bdev1", 00:20:17.649 "uuid": "d975524f-37e6-4da3-9beb-2f0978301a8e", 00:20:17.649 "strip_size_kb": 64, 00:20:17.649 "state": "online", 00:20:17.649 "raid_level": "raid5f", 00:20:17.649 "superblock": true, 00:20:17.649 "num_base_bdevs": 4, 00:20:17.649 "num_base_bdevs_discovered": 3, 00:20:17.649 "num_base_bdevs_operational": 3, 00:20:17.649 "base_bdevs_list": [ 00:20:17.649 { 00:20:17.649 "name": null, 00:20:17.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.649 "is_configured": false, 00:20:17.649 "data_offset": 0, 00:20:17.649 "data_size": 63488 00:20:17.649 }, 00:20:17.649 { 00:20:17.649 "name": "BaseBdev2", 00:20:17.649 "uuid": "c75fb840-5b57-59d0-b48f-c6b83a18b82c", 00:20:17.649 "is_configured": true, 00:20:17.649 "data_offset": 2048, 00:20:17.649 "data_size": 63488 00:20:17.649 }, 00:20:17.649 { 00:20:17.649 "name": "BaseBdev3", 00:20:17.649 "uuid": "3ffdfa36-3ebd-5efb-b0e1-4fb69813832d", 00:20:17.649 "is_configured": true, 00:20:17.649 "data_offset": 2048, 00:20:17.649 "data_size": 63488 00:20:17.649 }, 00:20:17.649 { 00:20:17.649 "name": "BaseBdev4", 00:20:17.649 "uuid": "87720334-1434-59a4-8268-1886adc0a771", 00:20:17.649 "is_configured": true, 00:20:17.649 "data_offset": 2048, 00:20:17.649 "data_size": 63488 00:20:17.649 } 00:20:17.649 ] 00:20:17.649 }' 00:20:17.649 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:17.649 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:17.649 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:17.649 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:17.649 
14:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:17.649 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:20:17.650 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:17.650 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:17.650 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:17.650 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:17.650 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:17.650 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:17.650 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.650 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.650 [2024-11-04 14:46:16.769291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:17.650 [2024-11-04 14:46:16.769494] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:17.650 [2024-11-04 14:46:16.769522] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:17.908 request: 00:20:17.908 { 00:20:17.908 "base_bdev": "BaseBdev1", 00:20:17.908 "raid_bdev": "raid_bdev1", 00:20:17.908 "method": "bdev_raid_add_base_bdev", 00:20:17.908 "req_id": 1 00:20:17.908 } 00:20:17.908 Got JSON-RPC error response 00:20:17.908 response: 00:20:17.908 { 00:20:17.908 "code": -22, 00:20:17.908 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:20:17.908 } 00:20:17.908 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:17.908 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:20:17.908 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:17.908 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:17.908 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:17.908 14:46:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:18.860 14:46:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:18.860 14:46:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:18.860 14:46:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:18.860 14:46:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:18.860 14:46:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:18.860 14:46:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:18.860 14:46:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:18.860 14:46:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:18.860 14:46:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:18.860 14:46:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:18.860 14:46:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.860 14:46:17 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.860 14:46:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.860 14:46:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.860 14:46:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.860 14:46:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:18.860 "name": "raid_bdev1", 00:20:18.860 "uuid": "d975524f-37e6-4da3-9beb-2f0978301a8e", 00:20:18.860 "strip_size_kb": 64, 00:20:18.860 "state": "online", 00:20:18.860 "raid_level": "raid5f", 00:20:18.860 "superblock": true, 00:20:18.860 "num_base_bdevs": 4, 00:20:18.860 "num_base_bdevs_discovered": 3, 00:20:18.860 "num_base_bdevs_operational": 3, 00:20:18.860 "base_bdevs_list": [ 00:20:18.860 { 00:20:18.860 "name": null, 00:20:18.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.860 "is_configured": false, 00:20:18.860 "data_offset": 0, 00:20:18.860 "data_size": 63488 00:20:18.860 }, 00:20:18.860 { 00:20:18.860 "name": "BaseBdev2", 00:20:18.860 "uuid": "c75fb840-5b57-59d0-b48f-c6b83a18b82c", 00:20:18.860 "is_configured": true, 00:20:18.860 "data_offset": 2048, 00:20:18.860 "data_size": 63488 00:20:18.860 }, 00:20:18.860 { 00:20:18.860 "name": "BaseBdev3", 00:20:18.860 "uuid": "3ffdfa36-3ebd-5efb-b0e1-4fb69813832d", 00:20:18.860 "is_configured": true, 00:20:18.860 "data_offset": 2048, 00:20:18.860 "data_size": 63488 00:20:18.860 }, 00:20:18.860 { 00:20:18.860 "name": "BaseBdev4", 00:20:18.860 "uuid": "87720334-1434-59a4-8268-1886adc0a771", 00:20:18.860 "is_configured": true, 00:20:18.860 "data_offset": 2048, 00:20:18.860 "data_size": 63488 00:20:18.860 } 00:20:18.860 ] 00:20:18.860 }' 00:20:18.861 14:46:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:18.861 14:46:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:20:19.428 14:46:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:19.428 14:46:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:19.428 14:46:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:19.428 14:46:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:19.428 14:46:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:19.428 14:46:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.428 14:46:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.428 14:46:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.428 14:46:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.428 14:46:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.428 14:46:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:19.428 "name": "raid_bdev1", 00:20:19.428 "uuid": "d975524f-37e6-4da3-9beb-2f0978301a8e", 00:20:19.428 "strip_size_kb": 64, 00:20:19.428 "state": "online", 00:20:19.428 "raid_level": "raid5f", 00:20:19.428 "superblock": true, 00:20:19.428 "num_base_bdevs": 4, 00:20:19.428 "num_base_bdevs_discovered": 3, 00:20:19.428 "num_base_bdevs_operational": 3, 00:20:19.428 "base_bdevs_list": [ 00:20:19.428 { 00:20:19.428 "name": null, 00:20:19.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.428 "is_configured": false, 00:20:19.428 "data_offset": 0, 00:20:19.428 "data_size": 63488 00:20:19.428 }, 00:20:19.428 { 00:20:19.428 "name": "BaseBdev2", 00:20:19.428 "uuid": "c75fb840-5b57-59d0-b48f-c6b83a18b82c", 00:20:19.428 "is_configured": true, 
00:20:19.428 "data_offset": 2048, 00:20:19.428 "data_size": 63488 00:20:19.428 }, 00:20:19.428 { 00:20:19.428 "name": "BaseBdev3", 00:20:19.428 "uuid": "3ffdfa36-3ebd-5efb-b0e1-4fb69813832d", 00:20:19.428 "is_configured": true, 00:20:19.428 "data_offset": 2048, 00:20:19.428 "data_size": 63488 00:20:19.428 }, 00:20:19.428 { 00:20:19.428 "name": "BaseBdev4", 00:20:19.428 "uuid": "87720334-1434-59a4-8268-1886adc0a771", 00:20:19.428 "is_configured": true, 00:20:19.428 "data_offset": 2048, 00:20:19.428 "data_size": 63488 00:20:19.428 } 00:20:19.428 ] 00:20:19.428 }' 00:20:19.428 14:46:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:19.428 14:46:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:19.428 14:46:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:19.428 14:46:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:19.428 14:46:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85519 00:20:19.428 14:46:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 85519 ']' 00:20:19.428 14:46:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 85519 00:20:19.428 14:46:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:20:19.428 14:46:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:19.428 14:46:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85519 00:20:19.428 14:46:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:19.428 14:46:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:19.428 killing process with pid 85519 00:20:19.428 14:46:18 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85519' 00:20:19.428 14:46:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 85519 00:20:19.428 Received shutdown signal, test time was about 60.000000 seconds 00:20:19.428 00:20:19.428 Latency(us) 00:20:19.428 [2024-11-04T14:46:18.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.428 [2024-11-04T14:46:18.551Z] =================================================================================================================== 00:20:19.428 [2024-11-04T14:46:18.551Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:19.428 14:46:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 85519 00:20:19.428 [2024-11-04 14:46:18.513005] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:19.428 [2024-11-04 14:46:18.513184] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:19.428 [2024-11-04 14:46:18.513300] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:19.428 [2024-11-04 14:46:18.513345] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:19.995 [2024-11-04 14:46:18.993136] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:20.930 14:46:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:20:20.930 00:20:20.930 real 0m28.456s 00:20:20.930 user 0m36.989s 00:20:20.930 sys 0m2.794s 00:20:20.930 14:46:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:20.930 ************************************ 00:20:20.930 END TEST raid5f_rebuild_test_sb 00:20:20.930 14:46:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.930 ************************************ 00:20:21.189 14:46:20 
bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:20:21.189 14:46:20 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:20:21.189 14:46:20 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:20:21.189 14:46:20 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:21.189 14:46:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:21.189 ************************************ 00:20:21.189 START TEST raid_state_function_test_sb_4k 00:20:21.189 ************************************ 00:20:21.189 14:46:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:20:21.189 14:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:20:21.189 14:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:21.189 14:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:21.189 14:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:21.189 14:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:21.189 14:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:21.189 14:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:21.189 14:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:21.189 14:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:21.189 14:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:21.189 14:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:21.189 14:46:20 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:21.189 14:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:21.189 14:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:21.189 14:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:21.189 14:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:21.189 14:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:21.189 14:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:21.189 14:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:20:21.189 14:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:20:21.189 14:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:21.189 14:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:21.189 14:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86341 00:20:21.189 14:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86341' 00:20:21.189 Process raid pid: 86341 00:20:21.189 14:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:21.189 14:46:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86341 00:20:21.189 14:46:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 86341 ']' 00:20:21.189 14:46:20 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.189 14:46:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:21.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.189 14:46:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.189 14:46:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:21.189 14:46:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:21.189 [2024-11-04 14:46:20.189783] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:20:21.189 [2024-11-04 14:46:20.189983] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:21.449 [2024-11-04 14:46:20.376686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.449 [2024-11-04 14:46:20.504753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.708 [2024-11-04 14:46:20.708717] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:21.708 [2024-11-04 14:46:20.708767] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:22.274 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:22.274 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:20:22.274 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:20:22.274 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.274 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:22.274 [2024-11-04 14:46:21.129792] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:22.274 [2024-11-04 14:46:21.129856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:22.274 [2024-11-04 14:46:21.129872] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:22.274 [2024-11-04 14:46:21.129888] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:22.274 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.274 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:22.274 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:22.274 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:22.274 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:22.274 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:22.274 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:22.274 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:22.274 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:22.274 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:22.274 
14:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:22.274 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.274 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:22.274 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.274 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:22.274 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.274 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:22.274 "name": "Existed_Raid", 00:20:22.274 "uuid": "2a9c516e-53d0-4576-b369-59be4a1abc6e", 00:20:22.274 "strip_size_kb": 0, 00:20:22.274 "state": "configuring", 00:20:22.274 "raid_level": "raid1", 00:20:22.274 "superblock": true, 00:20:22.274 "num_base_bdevs": 2, 00:20:22.274 "num_base_bdevs_discovered": 0, 00:20:22.274 "num_base_bdevs_operational": 2, 00:20:22.274 "base_bdevs_list": [ 00:20:22.274 { 00:20:22.274 "name": "BaseBdev1", 00:20:22.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.274 "is_configured": false, 00:20:22.274 "data_offset": 0, 00:20:22.274 "data_size": 0 00:20:22.274 }, 00:20:22.274 { 00:20:22.274 "name": "BaseBdev2", 00:20:22.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.274 "is_configured": false, 00:20:22.274 "data_offset": 0, 00:20:22.274 "data_size": 0 00:20:22.274 } 00:20:22.274 ] 00:20:22.274 }' 00:20:22.274 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:22.274 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:22.533 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:20:22.533 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.533 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:22.533 [2024-11-04 14:46:21.597866] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:22.533 [2024-11-04 14:46:21.597910] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:22.533 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.533 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:22.533 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.533 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:22.533 [2024-11-04 14:46:21.605848] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:22.533 [2024-11-04 14:46:21.605899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:22.533 [2024-11-04 14:46:21.605913] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:22.533 [2024-11-04 14:46:21.605945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:22.533 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.533 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:20:22.533 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.533 14:46:21 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:22.533 [2024-11-04 14:46:21.650327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:22.533 BaseBdev1 00:20:22.533 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.533 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:22.533 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:20:22.533 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:22.533 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:20:22.533 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:22.533 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:22.533 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:22.533 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.533 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:22.791 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.791 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:22.791 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.791 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:22.791 [ 00:20:22.791 { 00:20:22.791 "name": "BaseBdev1", 00:20:22.791 "aliases": [ 00:20:22.791 
"1c203576-22df-4ebc-8c03-1dc2db6f71b7" 00:20:22.791 ], 00:20:22.791 "product_name": "Malloc disk", 00:20:22.791 "block_size": 4096, 00:20:22.791 "num_blocks": 8192, 00:20:22.791 "uuid": "1c203576-22df-4ebc-8c03-1dc2db6f71b7", 00:20:22.791 "assigned_rate_limits": { 00:20:22.791 "rw_ios_per_sec": 0, 00:20:22.791 "rw_mbytes_per_sec": 0, 00:20:22.791 "r_mbytes_per_sec": 0, 00:20:22.791 "w_mbytes_per_sec": 0 00:20:22.792 }, 00:20:22.792 "claimed": true, 00:20:22.792 "claim_type": "exclusive_write", 00:20:22.792 "zoned": false, 00:20:22.792 "supported_io_types": { 00:20:22.792 "read": true, 00:20:22.792 "write": true, 00:20:22.792 "unmap": true, 00:20:22.792 "flush": true, 00:20:22.792 "reset": true, 00:20:22.792 "nvme_admin": false, 00:20:22.792 "nvme_io": false, 00:20:22.792 "nvme_io_md": false, 00:20:22.792 "write_zeroes": true, 00:20:22.792 "zcopy": true, 00:20:22.792 "get_zone_info": false, 00:20:22.792 "zone_management": false, 00:20:22.792 "zone_append": false, 00:20:22.792 "compare": false, 00:20:22.792 "compare_and_write": false, 00:20:22.792 "abort": true, 00:20:22.792 "seek_hole": false, 00:20:22.792 "seek_data": false, 00:20:22.792 "copy": true, 00:20:22.792 "nvme_iov_md": false 00:20:22.792 }, 00:20:22.792 "memory_domains": [ 00:20:22.792 { 00:20:22.792 "dma_device_id": "system", 00:20:22.792 "dma_device_type": 1 00:20:22.792 }, 00:20:22.792 { 00:20:22.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:22.792 "dma_device_type": 2 00:20:22.792 } 00:20:22.792 ], 00:20:22.792 "driver_specific": {} 00:20:22.792 } 00:20:22.792 ] 00:20:22.792 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.792 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:20:22.792 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:22.792 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:22.792 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:22.792 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:22.792 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:22.792 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:22.792 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:22.792 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:22.792 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:22.792 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:22.792 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:22.792 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.792 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.792 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:22.792 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.792 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:22.792 "name": "Existed_Raid", 00:20:22.792 "uuid": "48fd3701-b5c7-4614-9c57-be8db54e3f1c", 00:20:22.792 "strip_size_kb": 0, 00:20:22.792 "state": "configuring", 00:20:22.792 "raid_level": "raid1", 00:20:22.792 "superblock": true, 00:20:22.792 "num_base_bdevs": 2, 00:20:22.792 
"num_base_bdevs_discovered": 1, 00:20:22.792 "num_base_bdevs_operational": 2, 00:20:22.792 "base_bdevs_list": [ 00:20:22.792 { 00:20:22.792 "name": "BaseBdev1", 00:20:22.792 "uuid": "1c203576-22df-4ebc-8c03-1dc2db6f71b7", 00:20:22.792 "is_configured": true, 00:20:22.792 "data_offset": 256, 00:20:22.792 "data_size": 7936 00:20:22.792 }, 00:20:22.792 { 00:20:22.792 "name": "BaseBdev2", 00:20:22.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.792 "is_configured": false, 00:20:22.792 "data_offset": 0, 00:20:22.792 "data_size": 0 00:20:22.792 } 00:20:22.792 ] 00:20:22.792 }' 00:20:22.792 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:22.792 14:46:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:23.103 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:23.103 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.104 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:23.104 [2024-11-04 14:46:22.198517] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:23.104 [2024-11-04 14:46:22.198582] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:23.104 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.104 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:23.104 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.104 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:23.104 [2024-11-04 14:46:22.206567] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:23.104 [2024-11-04 14:46:22.208916] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:23.104 [2024-11-04 14:46:22.208983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:23.361 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.361 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:23.361 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:23.361 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:23.361 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:23.361 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:23.361 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:23.361 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:23.361 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:23.361 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:23.362 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:23.362 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:23.362 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:23.362 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:20:23.362 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.362 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.362 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:23.362 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.362 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:23.362 "name": "Existed_Raid", 00:20:23.362 "uuid": "71ef6b26-8fd0-4794-a35c-9a1a4623241b", 00:20:23.362 "strip_size_kb": 0, 00:20:23.362 "state": "configuring", 00:20:23.362 "raid_level": "raid1", 00:20:23.362 "superblock": true, 00:20:23.362 "num_base_bdevs": 2, 00:20:23.362 "num_base_bdevs_discovered": 1, 00:20:23.362 "num_base_bdevs_operational": 2, 00:20:23.362 "base_bdevs_list": [ 00:20:23.362 { 00:20:23.362 "name": "BaseBdev1", 00:20:23.362 "uuid": "1c203576-22df-4ebc-8c03-1dc2db6f71b7", 00:20:23.362 "is_configured": true, 00:20:23.362 "data_offset": 256, 00:20:23.362 "data_size": 7936 00:20:23.362 }, 00:20:23.362 { 00:20:23.362 "name": "BaseBdev2", 00:20:23.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.362 "is_configured": false, 00:20:23.362 "data_offset": 0, 00:20:23.362 "data_size": 0 00:20:23.362 } 00:20:23.362 ] 00:20:23.362 }' 00:20:23.362 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:23.362 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:23.619 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:20:23.619 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.619 14:46:22 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:23.877 [2024-11-04 14:46:22.752460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:23.877 [2024-11-04 14:46:22.752786] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:23.877 [2024-11-04 14:46:22.752806] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:23.877 [2024-11-04 14:46:22.753154] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:23.877 BaseBdev2 00:20:23.877 [2024-11-04 14:46:22.753352] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:23.877 [2024-11-04 14:46:22.753372] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:23.878 [2024-11-04 14:46:22.753544] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:23.878 14:46:22 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:23.878 [ 00:20:23.878 { 00:20:23.878 "name": "BaseBdev2", 00:20:23.878 "aliases": [ 00:20:23.878 "16a108ca-bd5c-4f89-b2ca-4c7decebe23c" 00:20:23.878 ], 00:20:23.878 "product_name": "Malloc disk", 00:20:23.878 "block_size": 4096, 00:20:23.878 "num_blocks": 8192, 00:20:23.878 "uuid": "16a108ca-bd5c-4f89-b2ca-4c7decebe23c", 00:20:23.878 "assigned_rate_limits": { 00:20:23.878 "rw_ios_per_sec": 0, 00:20:23.878 "rw_mbytes_per_sec": 0, 00:20:23.878 "r_mbytes_per_sec": 0, 00:20:23.878 "w_mbytes_per_sec": 0 00:20:23.878 }, 00:20:23.878 "claimed": true, 00:20:23.878 "claim_type": "exclusive_write", 00:20:23.878 "zoned": false, 00:20:23.878 "supported_io_types": { 00:20:23.878 "read": true, 00:20:23.878 "write": true, 00:20:23.878 "unmap": true, 00:20:23.878 "flush": true, 00:20:23.878 "reset": true, 00:20:23.878 "nvme_admin": false, 00:20:23.878 "nvme_io": false, 00:20:23.878 "nvme_io_md": false, 00:20:23.878 "write_zeroes": true, 00:20:23.878 "zcopy": true, 00:20:23.878 "get_zone_info": false, 00:20:23.878 "zone_management": false, 00:20:23.878 "zone_append": false, 00:20:23.878 "compare": false, 00:20:23.878 "compare_and_write": false, 00:20:23.878 "abort": true, 00:20:23.878 "seek_hole": false, 00:20:23.878 "seek_data": false, 00:20:23.878 "copy": true, 00:20:23.878 "nvme_iov_md": false 
00:20:23.878 }, 00:20:23.878 "memory_domains": [ 00:20:23.878 { 00:20:23.878 "dma_device_id": "system", 00:20:23.878 "dma_device_type": 1 00:20:23.878 }, 00:20:23.878 { 00:20:23.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:23.878 "dma_device_type": 2 00:20:23.878 } 00:20:23.878 ], 00:20:23.878 "driver_specific": {} 00:20:23.878 } 00:20:23.878 ] 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:23.878 "name": "Existed_Raid", 00:20:23.878 "uuid": "71ef6b26-8fd0-4794-a35c-9a1a4623241b", 00:20:23.878 "strip_size_kb": 0, 00:20:23.878 "state": "online", 00:20:23.878 "raid_level": "raid1", 00:20:23.878 "superblock": true, 00:20:23.878 "num_base_bdevs": 2, 00:20:23.878 "num_base_bdevs_discovered": 2, 00:20:23.878 "num_base_bdevs_operational": 2, 00:20:23.878 "base_bdevs_list": [ 00:20:23.878 { 00:20:23.878 "name": "BaseBdev1", 00:20:23.878 "uuid": "1c203576-22df-4ebc-8c03-1dc2db6f71b7", 00:20:23.878 "is_configured": true, 00:20:23.878 "data_offset": 256, 00:20:23.878 "data_size": 7936 00:20:23.878 }, 00:20:23.878 { 00:20:23.878 "name": "BaseBdev2", 00:20:23.878 "uuid": "16a108ca-bd5c-4f89-b2ca-4c7decebe23c", 00:20:23.878 "is_configured": true, 00:20:23.878 "data_offset": 256, 00:20:23.878 "data_size": 7936 00:20:23.878 } 00:20:23.878 ] 00:20:23.878 }' 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:23.878 14:46:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:24.445 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:24.445 14:46:23 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:24.445 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:24.445 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:24.445 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:20:24.445 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:24.445 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:24.445 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.445 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:24.445 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:24.445 [2024-11-04 14:46:23.293048] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:24.445 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.445 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:24.445 "name": "Existed_Raid", 00:20:24.445 "aliases": [ 00:20:24.445 "71ef6b26-8fd0-4794-a35c-9a1a4623241b" 00:20:24.445 ], 00:20:24.445 "product_name": "Raid Volume", 00:20:24.445 "block_size": 4096, 00:20:24.445 "num_blocks": 7936, 00:20:24.445 "uuid": "71ef6b26-8fd0-4794-a35c-9a1a4623241b", 00:20:24.445 "assigned_rate_limits": { 00:20:24.445 "rw_ios_per_sec": 0, 00:20:24.445 "rw_mbytes_per_sec": 0, 00:20:24.445 "r_mbytes_per_sec": 0, 00:20:24.445 "w_mbytes_per_sec": 0 00:20:24.445 }, 00:20:24.445 "claimed": false, 00:20:24.445 "zoned": false, 00:20:24.445 "supported_io_types": { 00:20:24.445 "read": true, 
00:20:24.445 "write": true, 00:20:24.445 "unmap": false, 00:20:24.445 "flush": false, 00:20:24.445 "reset": true, 00:20:24.445 "nvme_admin": false, 00:20:24.445 "nvme_io": false, 00:20:24.445 "nvme_io_md": false, 00:20:24.445 "write_zeroes": true, 00:20:24.445 "zcopy": false, 00:20:24.445 "get_zone_info": false, 00:20:24.445 "zone_management": false, 00:20:24.445 "zone_append": false, 00:20:24.445 "compare": false, 00:20:24.445 "compare_and_write": false, 00:20:24.445 "abort": false, 00:20:24.445 "seek_hole": false, 00:20:24.445 "seek_data": false, 00:20:24.445 "copy": false, 00:20:24.445 "nvme_iov_md": false 00:20:24.445 }, 00:20:24.445 "memory_domains": [ 00:20:24.445 { 00:20:24.445 "dma_device_id": "system", 00:20:24.445 "dma_device_type": 1 00:20:24.445 }, 00:20:24.445 { 00:20:24.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:24.445 "dma_device_type": 2 00:20:24.445 }, 00:20:24.446 { 00:20:24.446 "dma_device_id": "system", 00:20:24.446 "dma_device_type": 1 00:20:24.446 }, 00:20:24.446 { 00:20:24.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:24.446 "dma_device_type": 2 00:20:24.446 } 00:20:24.446 ], 00:20:24.446 "driver_specific": { 00:20:24.446 "raid": { 00:20:24.446 "uuid": "71ef6b26-8fd0-4794-a35c-9a1a4623241b", 00:20:24.446 "strip_size_kb": 0, 00:20:24.446 "state": "online", 00:20:24.446 "raid_level": "raid1", 00:20:24.446 "superblock": true, 00:20:24.446 "num_base_bdevs": 2, 00:20:24.446 "num_base_bdevs_discovered": 2, 00:20:24.446 "num_base_bdevs_operational": 2, 00:20:24.446 "base_bdevs_list": [ 00:20:24.446 { 00:20:24.446 "name": "BaseBdev1", 00:20:24.446 "uuid": "1c203576-22df-4ebc-8c03-1dc2db6f71b7", 00:20:24.446 "is_configured": true, 00:20:24.446 "data_offset": 256, 00:20:24.446 "data_size": 7936 00:20:24.446 }, 00:20:24.446 { 00:20:24.446 "name": "BaseBdev2", 00:20:24.446 "uuid": "16a108ca-bd5c-4f89-b2ca-4c7decebe23c", 00:20:24.446 "is_configured": true, 00:20:24.446 "data_offset": 256, 00:20:24.446 "data_size": 7936 00:20:24.446 } 
00:20:24.446 ] 00:20:24.446 } 00:20:24.446 } 00:20:24.446 }' 00:20:24.446 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:24.446 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:24.446 BaseBdev2' 00:20:24.446 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:24.446 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:20:24.446 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:24.446 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:24.446 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:24.446 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.446 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:24.446 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.446 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:24.446 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:24.446 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:24.446 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:24.446 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:24.446 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.446 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:24.446 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.446 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:24.446 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:24.446 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:24.446 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.446 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:24.446 [2024-11-04 14:46:23.564796] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:24.706 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.706 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:24.706 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:20:24.706 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:24.706 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:20:24.706 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:24.706 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:20:24.706 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:24.706 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:24.706 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:24.706 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:24.706 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:24.706 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:24.706 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:24.706 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:24.706 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:24.706 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:24.706 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.706 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.706 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:24.706 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.706 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:24.706 "name": "Existed_Raid", 00:20:24.706 "uuid": "71ef6b26-8fd0-4794-a35c-9a1a4623241b", 00:20:24.706 "strip_size_kb": 0, 00:20:24.706 "state": "online", 00:20:24.706 "raid_level": "raid1", 00:20:24.706 "superblock": true, 00:20:24.706 "num_base_bdevs": 2, 00:20:24.706 
"num_base_bdevs_discovered": 1, 00:20:24.706 "num_base_bdevs_operational": 1, 00:20:24.706 "base_bdevs_list": [ 00:20:24.706 { 00:20:24.706 "name": null, 00:20:24.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.706 "is_configured": false, 00:20:24.706 "data_offset": 0, 00:20:24.706 "data_size": 7936 00:20:24.706 }, 00:20:24.706 { 00:20:24.706 "name": "BaseBdev2", 00:20:24.706 "uuid": "16a108ca-bd5c-4f89-b2ca-4c7decebe23c", 00:20:24.706 "is_configured": true, 00:20:24.706 "data_offset": 256, 00:20:24.706 "data_size": 7936 00:20:24.706 } 00:20:24.706 ] 00:20:24.706 }' 00:20:24.706 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:24.706 14:46:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:25.273 14:46:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:25.273 14:46:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:25.273 14:46:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.273 14:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.273 14:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:25.273 14:46:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:25.273 14:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.273 14:46:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:25.273 14:46:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:25.273 14:46:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:25.273 14:46:24 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.273 14:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:25.273 [2024-11-04 14:46:24.198492] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:25.273 [2024-11-04 14:46:24.198624] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:25.273 [2024-11-04 14:46:24.285152] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:25.273 [2024-11-04 14:46:24.285229] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:25.273 [2024-11-04 14:46:24.285249] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:25.273 14:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.273 14:46:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:25.273 14:46:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:25.273 14:46:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:25.273 14:46:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.273 14:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.273 14:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:25.273 14:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.273 14:46:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:25.273 14:46:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' 
']' 00:20:25.273 14:46:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:20:25.273 14:46:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86341 00:20:25.273 14:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 86341 ']' 00:20:25.273 14:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 86341 00:20:25.273 14:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:20:25.273 14:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:25.273 14:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86341 00:20:25.273 14:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:25.273 killing process with pid 86341 00:20:25.273 14:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:25.273 14:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86341' 00:20:25.273 14:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@971 -- # kill 86341 00:20:25.273 [2024-11-04 14:46:24.379228] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:25.273 14:46:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@976 -- # wait 86341 00:20:25.273 [2024-11-04 14:46:24.393788] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:26.704 14:46:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:20:26.704 00:20:26.704 real 0m5.334s 00:20:26.704 user 0m8.030s 00:20:26.704 sys 0m0.777s 00:20:26.704 14:46:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:20:26.704 ************************************ 00:20:26.704 END TEST raid_state_function_test_sb_4k 00:20:26.704 ************************************ 00:20:26.704 14:46:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:26.704 14:46:25 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:20:26.704 14:46:25 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:20:26.704 14:46:25 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:26.704 14:46:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:26.704 ************************************ 00:20:26.704 START TEST raid_superblock_test_4k 00:20:26.704 ************************************ 00:20:26.704 14:46:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:20:26.704 14:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:20:26.704 14:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:20:26.704 14:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:26.704 14:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:26.704 14:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:26.704 14:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:26.704 14:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:26.704 14:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:26.704 14:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:26.704 14:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 
00:20:26.704 14:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:26.704 14:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:26.704 14:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:26.704 14:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:20:26.704 14:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:20:26.704 14:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86592 00:20:26.704 14:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86592 00:20:26.705 14:46:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:26.705 14:46:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # '[' -z 86592 ']' 00:20:26.705 14:46:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.705 14:46:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:26.705 14:46:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.705 14:46:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:26.705 14:46:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:26.705 [2024-11-04 14:46:25.591134] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:20:26.705 [2024-11-04 14:46:25.591317] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86592 ] 00:20:26.705 [2024-11-04 14:46:25.776268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.963 [2024-11-04 14:46:25.908206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.220 [2024-11-04 14:46:26.111714] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:27.220 [2024-11-04 14:46:26.111774] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:27.477 14:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:27.477 14:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@866 -- # return 0 00:20:27.477 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:27.477 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:27.477 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:27.477 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:27.477 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:27.477 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:27.477 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:27.477 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:27.477 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:20:27.477 14:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.477 14:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:27.477 malloc1 00:20:27.477 14:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.477 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:27.477 14:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.477 14:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:27.736 [2024-11-04 14:46:26.598872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:27.736 [2024-11-04 14:46:26.598981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.736 [2024-11-04 14:46:26.599020] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:27.736 [2024-11-04 14:46:26.599037] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.736 [2024-11-04 14:46:26.601887] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.736 [2024-11-04 14:46:26.602100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:27.736 pt1 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:27.736 malloc2 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:27.736 [2024-11-04 14:46:26.655286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:27.736 [2024-11-04 14:46:26.655383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.736 [2024-11-04 14:46:26.655419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:27.736 [2024-11-04 14:46:26.655435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.736 [2024-11-04 14:46:26.658347] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.736 [2024-11-04 
14:46:26.658395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:27.736 pt2 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:27.736 [2024-11-04 14:46:26.667477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:27.736 [2024-11-04 14:46:26.669994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:27.736 [2024-11-04 14:46:26.670268] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:27.736 [2024-11-04 14:46:26.670293] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:27.736 [2024-11-04 14:46:26.670631] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:27.736 [2024-11-04 14:46:26.670849] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:27.736 [2024-11-04 14:46:26.670874] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:27.736 [2024-11-04 14:46:26.671111] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:27.736 "name": "raid_bdev1", 00:20:27.736 "uuid": "4a831a47-7f5b-47b9-b6d8-2ca3703d195f", 00:20:27.736 "strip_size_kb": 0, 00:20:27.736 "state": "online", 00:20:27.736 "raid_level": "raid1", 00:20:27.736 "superblock": true, 00:20:27.736 "num_base_bdevs": 2, 00:20:27.736 
"num_base_bdevs_discovered": 2, 00:20:27.736 "num_base_bdevs_operational": 2, 00:20:27.736 "base_bdevs_list": [ 00:20:27.736 { 00:20:27.736 "name": "pt1", 00:20:27.736 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:27.736 "is_configured": true, 00:20:27.736 "data_offset": 256, 00:20:27.736 "data_size": 7936 00:20:27.736 }, 00:20:27.736 { 00:20:27.736 "name": "pt2", 00:20:27.736 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:27.736 "is_configured": true, 00:20:27.736 "data_offset": 256, 00:20:27.736 "data_size": 7936 00:20:27.736 } 00:20:27.736 ] 00:20:27.736 }' 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:27.736 14:46:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:28.302 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:28.302 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:28.302 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:28.302 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:28.302 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:20:28.302 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:28.302 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:28.302 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:28.302 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.302 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:28.302 [2024-11-04 14:46:27.191874] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:20:28.302 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.302 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:28.302 "name": "raid_bdev1", 00:20:28.302 "aliases": [ 00:20:28.302 "4a831a47-7f5b-47b9-b6d8-2ca3703d195f" 00:20:28.302 ], 00:20:28.302 "product_name": "Raid Volume", 00:20:28.302 "block_size": 4096, 00:20:28.302 "num_blocks": 7936, 00:20:28.302 "uuid": "4a831a47-7f5b-47b9-b6d8-2ca3703d195f", 00:20:28.302 "assigned_rate_limits": { 00:20:28.302 "rw_ios_per_sec": 0, 00:20:28.302 "rw_mbytes_per_sec": 0, 00:20:28.302 "r_mbytes_per_sec": 0, 00:20:28.302 "w_mbytes_per_sec": 0 00:20:28.302 }, 00:20:28.302 "claimed": false, 00:20:28.302 "zoned": false, 00:20:28.302 "supported_io_types": { 00:20:28.302 "read": true, 00:20:28.302 "write": true, 00:20:28.302 "unmap": false, 00:20:28.302 "flush": false, 00:20:28.302 "reset": true, 00:20:28.302 "nvme_admin": false, 00:20:28.302 "nvme_io": false, 00:20:28.303 "nvme_io_md": false, 00:20:28.303 "write_zeroes": true, 00:20:28.303 "zcopy": false, 00:20:28.303 "get_zone_info": false, 00:20:28.303 "zone_management": false, 00:20:28.303 "zone_append": false, 00:20:28.303 "compare": false, 00:20:28.303 "compare_and_write": false, 00:20:28.303 "abort": false, 00:20:28.303 "seek_hole": false, 00:20:28.303 "seek_data": false, 00:20:28.303 "copy": false, 00:20:28.303 "nvme_iov_md": false 00:20:28.303 }, 00:20:28.303 "memory_domains": [ 00:20:28.303 { 00:20:28.303 "dma_device_id": "system", 00:20:28.303 "dma_device_type": 1 00:20:28.303 }, 00:20:28.303 { 00:20:28.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:28.303 "dma_device_type": 2 00:20:28.303 }, 00:20:28.303 { 00:20:28.303 "dma_device_id": "system", 00:20:28.303 "dma_device_type": 1 00:20:28.303 }, 00:20:28.303 { 00:20:28.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:28.303 "dma_device_type": 2 00:20:28.303 } 00:20:28.303 ], 
00:20:28.303 "driver_specific": { 00:20:28.303 "raid": { 00:20:28.303 "uuid": "4a831a47-7f5b-47b9-b6d8-2ca3703d195f", 00:20:28.303 "strip_size_kb": 0, 00:20:28.303 "state": "online", 00:20:28.303 "raid_level": "raid1", 00:20:28.303 "superblock": true, 00:20:28.303 "num_base_bdevs": 2, 00:20:28.303 "num_base_bdevs_discovered": 2, 00:20:28.303 "num_base_bdevs_operational": 2, 00:20:28.303 "base_bdevs_list": [ 00:20:28.303 { 00:20:28.303 "name": "pt1", 00:20:28.303 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:28.303 "is_configured": true, 00:20:28.303 "data_offset": 256, 00:20:28.303 "data_size": 7936 00:20:28.303 }, 00:20:28.303 { 00:20:28.303 "name": "pt2", 00:20:28.303 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:28.303 "is_configured": true, 00:20:28.303 "data_offset": 256, 00:20:28.303 "data_size": 7936 00:20:28.303 } 00:20:28.303 ] 00:20:28.303 } 00:20:28.303 } 00:20:28.303 }' 00:20:28.303 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:28.303 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:28.303 pt2' 00:20:28.303 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:28.303 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:20:28.303 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:28.303 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:28.303 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.303 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:28.303 14:46:27 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:28.303 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.303 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:28.303 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:28.303 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:28.303 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:28.303 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:28.303 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.303 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:28.303 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.561 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:28.562 [2024-11-04 14:46:27.443944] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4a831a47-7f5b-47b9-b6d8-2ca3703d195f 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 4a831a47-7f5b-47b9-b6d8-2ca3703d195f ']' 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:28.562 [2024-11-04 14:46:27.503573] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:28.562 [2024-11-04 14:46:27.503612] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:28.562 [2024-11-04 14:46:27.503723] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:28.562 [2024-11-04 14:46:27.503804] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:28.562 [2024-11-04 14:46:27.503824] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:28.562 [2024-11-04 14:46:27.631654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:28.562 [2024-11-04 14:46:27.634165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:28.562 [2024-11-04 14:46:27.634273] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:28.562 [2024-11-04 14:46:27.634354] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:28.562 [2024-11-04 14:46:27.634381] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:28.562 [2024-11-04 14:46:27.634396] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:28.562 request: 00:20:28.562 { 00:20:28.562 "name": "raid_bdev1", 00:20:28.562 "raid_level": "raid1", 00:20:28.562 "base_bdevs": [ 00:20:28.562 "malloc1", 00:20:28.562 "malloc2" 00:20:28.562 ], 00:20:28.562 "superblock": false, 00:20:28.562 "method": "bdev_raid_create", 00:20:28.562 "req_id": 1 00:20:28.562 } 00:20:28.562 Got JSON-RPC error response 00:20:28.562 response: 00:20:28.562 { 00:20:28.562 "code": -17, 00:20:28.562 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:28.562 } 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:28.562 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.820 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:28.820 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:28.820 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:20:28.821 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.821 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:28.821 [2024-11-04 14:46:27.699647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:28.821 [2024-11-04 14:46:27.699729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:28.821 [2024-11-04 14:46:27.699757] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:28.821 [2024-11-04 14:46:27.699775] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:28.821 [2024-11-04 14:46:27.702603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:28.821 [2024-11-04 14:46:27.702653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:28.821 [2024-11-04 14:46:27.702763] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:28.821 [2024-11-04 14:46:27.702847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:28.821 pt1 00:20:28.821 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.821 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:28.821 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:28.821 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:28.821 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:28.821 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:28.821 14:46:27 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:28.821 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:28.821 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:28.821 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:28.821 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:28.821 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.821 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.821 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.821 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:28.821 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.821 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:28.821 "name": "raid_bdev1", 00:20:28.821 "uuid": "4a831a47-7f5b-47b9-b6d8-2ca3703d195f", 00:20:28.821 "strip_size_kb": 0, 00:20:28.821 "state": "configuring", 00:20:28.821 "raid_level": "raid1", 00:20:28.821 "superblock": true, 00:20:28.821 "num_base_bdevs": 2, 00:20:28.821 "num_base_bdevs_discovered": 1, 00:20:28.821 "num_base_bdevs_operational": 2, 00:20:28.821 "base_bdevs_list": [ 00:20:28.821 { 00:20:28.821 "name": "pt1", 00:20:28.821 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:28.821 "is_configured": true, 00:20:28.821 "data_offset": 256, 00:20:28.821 "data_size": 7936 00:20:28.821 }, 00:20:28.821 { 00:20:28.821 "name": null, 00:20:28.821 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:28.821 "is_configured": false, 00:20:28.821 "data_offset": 256, 00:20:28.821 "data_size": 7936 00:20:28.821 } 
00:20:28.821 ] 00:20:28.821 }' 00:20:28.821 14:46:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:28.821 14:46:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:29.079 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:20:29.079 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:29.079 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:29.079 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:29.079 14:46:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.079 14:46:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:29.079 [2024-11-04 14:46:28.183782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:29.079 [2024-11-04 14:46:28.183870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:29.079 [2024-11-04 14:46:28.183901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:29.079 [2024-11-04 14:46:28.183920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:29.079 [2024-11-04 14:46:28.184541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:29.079 [2024-11-04 14:46:28.184586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:29.079 [2024-11-04 14:46:28.184700] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:29.079 [2024-11-04 14:46:28.184737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:29.079 [2024-11-04 14:46:28.184899] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:20:29.079 [2024-11-04 14:46:28.184920] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:29.079 [2024-11-04 14:46:28.185239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:29.079 [2024-11-04 14:46:28.185449] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:29.079 [2024-11-04 14:46:28.185465] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:29.079 [2024-11-04 14:46:28.185641] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:29.079 pt2 00:20:29.079 14:46:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.079 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:29.079 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:29.079 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:29.079 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:29.079 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:29.079 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:29.079 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:29.079 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:29.079 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:29.079 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:29.079 14:46:28 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:29.079 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:29.079 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.079 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.079 14:46:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.079 14:46:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:29.337 14:46:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.337 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:29.337 "name": "raid_bdev1", 00:20:29.337 "uuid": "4a831a47-7f5b-47b9-b6d8-2ca3703d195f", 00:20:29.337 "strip_size_kb": 0, 00:20:29.337 "state": "online", 00:20:29.337 "raid_level": "raid1", 00:20:29.337 "superblock": true, 00:20:29.337 "num_base_bdevs": 2, 00:20:29.337 "num_base_bdevs_discovered": 2, 00:20:29.337 "num_base_bdevs_operational": 2, 00:20:29.337 "base_bdevs_list": [ 00:20:29.337 { 00:20:29.337 "name": "pt1", 00:20:29.337 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:29.337 "is_configured": true, 00:20:29.337 "data_offset": 256, 00:20:29.337 "data_size": 7936 00:20:29.337 }, 00:20:29.337 { 00:20:29.337 "name": "pt2", 00:20:29.337 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:29.337 "is_configured": true, 00:20:29.337 "data_offset": 256, 00:20:29.337 "data_size": 7936 00:20:29.337 } 00:20:29.337 ] 00:20:29.337 }' 00:20:29.337 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:29.337 14:46:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:29.596 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:20:29.596 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:29.596 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:29.596 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:29.596 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:20:29.596 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:29.596 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:29.596 14:46:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.596 14:46:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:29.596 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:29.596 [2024-11-04 14:46:28.696210] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:29.596 14:46:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.855 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:29.855 "name": "raid_bdev1", 00:20:29.855 "aliases": [ 00:20:29.855 "4a831a47-7f5b-47b9-b6d8-2ca3703d195f" 00:20:29.855 ], 00:20:29.855 "product_name": "Raid Volume", 00:20:29.855 "block_size": 4096, 00:20:29.855 "num_blocks": 7936, 00:20:29.855 "uuid": "4a831a47-7f5b-47b9-b6d8-2ca3703d195f", 00:20:29.855 "assigned_rate_limits": { 00:20:29.855 "rw_ios_per_sec": 0, 00:20:29.855 "rw_mbytes_per_sec": 0, 00:20:29.855 "r_mbytes_per_sec": 0, 00:20:29.855 "w_mbytes_per_sec": 0 00:20:29.855 }, 00:20:29.855 "claimed": false, 00:20:29.855 "zoned": false, 00:20:29.855 "supported_io_types": { 00:20:29.855 "read": true, 00:20:29.855 "write": true, 00:20:29.855 "unmap": false, 
00:20:29.855 "flush": false, 00:20:29.855 "reset": true, 00:20:29.855 "nvme_admin": false, 00:20:29.855 "nvme_io": false, 00:20:29.855 "nvme_io_md": false, 00:20:29.855 "write_zeroes": true, 00:20:29.855 "zcopy": false, 00:20:29.855 "get_zone_info": false, 00:20:29.855 "zone_management": false, 00:20:29.855 "zone_append": false, 00:20:29.855 "compare": false, 00:20:29.855 "compare_and_write": false, 00:20:29.855 "abort": false, 00:20:29.855 "seek_hole": false, 00:20:29.855 "seek_data": false, 00:20:29.855 "copy": false, 00:20:29.855 "nvme_iov_md": false 00:20:29.855 }, 00:20:29.855 "memory_domains": [ 00:20:29.855 { 00:20:29.855 "dma_device_id": "system", 00:20:29.855 "dma_device_type": 1 00:20:29.855 }, 00:20:29.855 { 00:20:29.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:29.855 "dma_device_type": 2 00:20:29.855 }, 00:20:29.855 { 00:20:29.855 "dma_device_id": "system", 00:20:29.855 "dma_device_type": 1 00:20:29.855 }, 00:20:29.855 { 00:20:29.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:29.855 "dma_device_type": 2 00:20:29.855 } 00:20:29.855 ], 00:20:29.855 "driver_specific": { 00:20:29.855 "raid": { 00:20:29.855 "uuid": "4a831a47-7f5b-47b9-b6d8-2ca3703d195f", 00:20:29.855 "strip_size_kb": 0, 00:20:29.855 "state": "online", 00:20:29.855 "raid_level": "raid1", 00:20:29.855 "superblock": true, 00:20:29.855 "num_base_bdevs": 2, 00:20:29.855 "num_base_bdevs_discovered": 2, 00:20:29.855 "num_base_bdevs_operational": 2, 00:20:29.855 "base_bdevs_list": [ 00:20:29.855 { 00:20:29.855 "name": "pt1", 00:20:29.855 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:29.855 "is_configured": true, 00:20:29.855 "data_offset": 256, 00:20:29.855 "data_size": 7936 00:20:29.855 }, 00:20:29.855 { 00:20:29.855 "name": "pt2", 00:20:29.855 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:29.855 "is_configured": true, 00:20:29.855 "data_offset": 256, 00:20:29.855 "data_size": 7936 00:20:29.855 } 00:20:29.855 ] 00:20:29.855 } 00:20:29.855 } 00:20:29.855 }' 00:20:29.855 
14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:29.855 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:29.855 pt2' 00:20:29.855 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:29.855 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:20:29.855 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:29.855 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:29.855 14:46:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.855 14:46:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:29.855 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:29.855 14:46:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.855 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:29.855 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:29.855 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:29.855 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:29.855 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:29.855 14:46:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.855 
14:46:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:29.855 14:46:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.855 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:20:29.855 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:20:29.855 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:29.855 14:46:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.855 14:46:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:29.855 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:29.855 [2024-11-04 14:46:28.956279] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:29.855 14:46:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.114 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 4a831a47-7f5b-47b9-b6d8-2ca3703d195f '!=' 4a831a47-7f5b-47b9-b6d8-2ca3703d195f ']' 00:20:30.114 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:20:30.114 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:30.114 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:20:30.114 14:46:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:30.114 14:46:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.114 14:46:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:30.114 [2024-11-04 14:46:29.004185] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:30.114 
14:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.114 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:30.114 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:30.114 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:30.114 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:30.114 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:30.114 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:30.114 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.114 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:30.114 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.114 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.114 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.114 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.114 14:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.114 14:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:30.114 14:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.114 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.114 "name": "raid_bdev1", 00:20:30.114 "uuid": "4a831a47-7f5b-47b9-b6d8-2ca3703d195f", 
00:20:30.114 "strip_size_kb": 0, 00:20:30.114 "state": "online", 00:20:30.114 "raid_level": "raid1", 00:20:30.114 "superblock": true, 00:20:30.114 "num_base_bdevs": 2, 00:20:30.114 "num_base_bdevs_discovered": 1, 00:20:30.114 "num_base_bdevs_operational": 1, 00:20:30.114 "base_bdevs_list": [ 00:20:30.114 { 00:20:30.114 "name": null, 00:20:30.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.114 "is_configured": false, 00:20:30.114 "data_offset": 0, 00:20:30.114 "data_size": 7936 00:20:30.114 }, 00:20:30.114 { 00:20:30.114 "name": "pt2", 00:20:30.114 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:30.114 "is_configured": true, 00:20:30.114 "data_offset": 256, 00:20:30.114 "data_size": 7936 00:20:30.114 } 00:20:30.114 ] 00:20:30.114 }' 00:20:30.114 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.114 14:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:30.680 [2024-11-04 14:46:29.556135] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:30.680 [2024-11-04 14:46:29.556175] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:30.680 [2024-11-04 14:46:29.556272] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:30.680 [2024-11-04 14:46:29.556337] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:30.680 [2024-11-04 14:46:29.556356] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:30.680 14:46:29 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:20:30.680 14:46:29 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:30.680 [2024-11-04 14:46:29.620145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:30.680 [2024-11-04 14:46:29.620226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:30.680 [2024-11-04 14:46:29.620253] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:30.680 [2024-11-04 14:46:29.620271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:30.680 [2024-11-04 14:46:29.623471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:30.680 [2024-11-04 14:46:29.623523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:30.680 [2024-11-04 14:46:29.623631] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:30.680 [2024-11-04 14:46:29.623696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:30.680 [2024-11-04 14:46:29.623828] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:30.680 [2024-11-04 14:46:29.623851] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:30.680 [2024-11-04 14:46:29.624169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:30.680 [2024-11-04 14:46:29.624366] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:30.680 [2024-11-04 14:46:29.624383] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:20:30.680 [2024-11-04 14:46:29.624616] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:30.680 pt2 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.680 "name": "raid_bdev1", 00:20:30.680 "uuid": "4a831a47-7f5b-47b9-b6d8-2ca3703d195f", 00:20:30.680 "strip_size_kb": 0, 00:20:30.680 "state": "online", 00:20:30.680 "raid_level": "raid1", 00:20:30.680 "superblock": true, 00:20:30.680 "num_base_bdevs": 2, 00:20:30.680 "num_base_bdevs_discovered": 1, 00:20:30.680 "num_base_bdevs_operational": 1, 00:20:30.680 "base_bdevs_list": [ 00:20:30.680 { 00:20:30.680 "name": null, 00:20:30.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.680 "is_configured": false, 00:20:30.680 "data_offset": 256, 00:20:30.680 "data_size": 7936 00:20:30.680 }, 00:20:30.680 { 00:20:30.680 "name": "pt2", 00:20:30.680 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:30.680 "is_configured": true, 00:20:30.680 "data_offset": 256, 00:20:30.680 "data_size": 7936 00:20:30.680 } 00:20:30.680 ] 00:20:30.680 }' 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.680 14:46:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:31.297 14:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:31.297 14:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.297 14:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:31.297 [2024-11-04 14:46:30.136664] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:31.297 [2024-11-04 14:46:30.136707] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:31.297 [2024-11-04 14:46:30.136803] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:31.298 [2024-11-04 14:46:30.136872] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:31.298 [2024-11-04 14:46:30.136888] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:31.298 14:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.298 14:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.298 14:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.298 14:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:31.298 14:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:31.298 14:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.298 14:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:31.298 14:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:31.298 14:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:20:31.298 14:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:31.298 14:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.298 14:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:31.298 [2024-11-04 14:46:30.200724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:31.298 [2024-11-04 14:46:30.200798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:31.298 [2024-11-04 14:46:30.200829] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:31.298 [2024-11-04 14:46:30.200844] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:31.298 [2024-11-04 14:46:30.203745] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:31.298 [2024-11-04 14:46:30.203786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:31.298 [2024-11-04 14:46:30.203896] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:31.298 [2024-11-04 14:46:30.203996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:31.298 [2024-11-04 14:46:30.204179] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:31.298 [2024-11-04 14:46:30.204197] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:31.298 [2024-11-04 14:46:30.204220] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:31.298 [2024-11-04 14:46:30.204296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:31.298 [2024-11-04 14:46:30.204405] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:31.298 [2024-11-04 14:46:30.204421] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:31.298 [2024-11-04 14:46:30.204729] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:31.298 [2024-11-04 14:46:30.204913] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:31.298 [2024-11-04 14:46:30.204960] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:31.298 [2024-11-04 14:46:30.205205] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:31.298 pt1 00:20:31.298 14:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.298 14:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:20:31.298 14:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:31.298 14:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:31.298 14:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:31.298 14:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:31.298 14:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:31.298 14:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:31.298 14:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:31.298 14:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:31.298 14:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:31.298 14:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:31.298 14:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.298 14:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.298 14:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:31.298 14:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.298 14:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.298 14:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:31.298 "name": "raid_bdev1", 00:20:31.298 "uuid": "4a831a47-7f5b-47b9-b6d8-2ca3703d195f", 00:20:31.298 "strip_size_kb": 0, 00:20:31.298 "state": "online", 00:20:31.298 "raid_level": "raid1", 
00:20:31.298 "superblock": true, 00:20:31.298 "num_base_bdevs": 2, 00:20:31.298 "num_base_bdevs_discovered": 1, 00:20:31.298 "num_base_bdevs_operational": 1, 00:20:31.298 "base_bdevs_list": [ 00:20:31.298 { 00:20:31.298 "name": null, 00:20:31.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.298 "is_configured": false, 00:20:31.298 "data_offset": 256, 00:20:31.298 "data_size": 7936 00:20:31.298 }, 00:20:31.298 { 00:20:31.298 "name": "pt2", 00:20:31.298 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:31.298 "is_configured": true, 00:20:31.298 "data_offset": 256, 00:20:31.298 "data_size": 7936 00:20:31.298 } 00:20:31.298 ] 00:20:31.298 }' 00:20:31.298 14:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:31.298 14:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:31.863 14:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:31.863 14:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.863 14:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:31.863 14:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:31.863 14:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.863 14:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:31.863 14:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:31.863 14:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.863 14:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:31.863 14:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:31.863 
[2024-11-04 14:46:30.793569] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:31.863 14:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.863 14:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 4a831a47-7f5b-47b9-b6d8-2ca3703d195f '!=' 4a831a47-7f5b-47b9-b6d8-2ca3703d195f ']' 00:20:31.863 14:46:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86592 00:20:31.863 14:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # '[' -z 86592 ']' 00:20:31.863 14:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # kill -0 86592 00:20:31.863 14:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # uname 00:20:31.863 14:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:31.863 14:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86592 00:20:31.863 killing process with pid 86592 00:20:31.864 14:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:31.864 14:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:31.864 14:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86592' 00:20:31.864 14:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@971 -- # kill 86592 00:20:31.864 [2024-11-04 14:46:30.871948] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:31.864 [2024-11-04 14:46:30.872068] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:31.864 14:46:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@976 -- # wait 86592 00:20:31.864 [2024-11-04 14:46:30.872133] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:20:31.864 [2024-11-04 14:46:30.872155] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:32.122 [2024-11-04 14:46:31.060859] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:33.058 14:46:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:20:33.058 00:20:33.058 real 0m6.620s 00:20:33.058 user 0m10.476s 00:20:33.058 sys 0m0.950s 00:20:33.058 ************************************ 00:20:33.058 END TEST raid_superblock_test_4k 00:20:33.058 ************************************ 00:20:33.058 14:46:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:33.058 14:46:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:20:33.058 14:46:32 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:20:33.058 14:46:32 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:20:33.058 14:46:32 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:20:33.058 14:46:32 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:33.058 14:46:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:33.058 ************************************ 00:20:33.058 START TEST raid_rebuild_test_sb_4k 00:20:33.058 ************************************ 00:20:33.058 14:46:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:20:33.058 14:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:33.058 14:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:20:33.058 14:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:33.058 14:46:32 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:33.058 14:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:33.058 14:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:33.058 14:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:33.058 14:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:33.058 14:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:33.058 14:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:33.058 14:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:33.058 14:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:33.058 14:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:33.058 14:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:33.058 14:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:33.058 14:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:33.058 14:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:33.058 14:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:33.058 14:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:33.058 14:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:33.058 14:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:33.058 14:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:33.058 14:46:32 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:33.058 14:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:33.058 14:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86922 00:20:33.058 14:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86922 00:20:33.058 14:46:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 86922 ']' 00:20:33.058 14:46:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:33.058 14:46:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.058 14:46:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:33.058 14:46:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.058 14:46:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:33.058 14:46:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:33.317 [2024-11-04 14:46:32.231715] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:20:33.317 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:33.317 Zero copy mechanism will not be used. 
00:20:33.317 [2024-11-04 14:46:32.232162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86922 ] 00:20:33.317 [2024-11-04 14:46:32.405081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.581 [2024-11-04 14:46:32.536137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.840 [2024-11-04 14:46:32.747652] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:33.840 [2024-11-04 14:46:32.747699] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:34.405 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:34.405 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:20:34.405 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:34.405 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:20:34.405 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.405 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:34.405 BaseBdev1_malloc 00:20:34.405 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.405 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:34.405 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.405 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:34.405 [2024-11-04 14:46:33.324441] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:34.405 [2024-11-04 14:46:33.324700] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:34.405 [2024-11-04 14:46:33.324867] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:34.405 [2024-11-04 14:46:33.325006] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:34.405 [2024-11-04 14:46:33.327876] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:34.405 [2024-11-04 14:46:33.327968] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:34.405 BaseBdev1 00:20:34.405 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.405 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:34.405 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:20:34.405 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.405 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:34.405 BaseBdev2_malloc 00:20:34.405 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.405 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:34.405 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.405 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:34.405 [2024-11-04 14:46:33.377194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:34.405 [2024-11-04 14:46:33.377405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:20:34.405 [2024-11-04 14:46:33.377476] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:34.405 [2024-11-04 14:46:33.377639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:34.405 [2024-11-04 14:46:33.380592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:34.405 [2024-11-04 14:46:33.380640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:34.405 BaseBdev2 00:20:34.405 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.405 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:20:34.405 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.405 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:34.405 spare_malloc 00:20:34.405 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.405 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:34.405 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.405 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:34.405 spare_delay 00:20:34.405 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.405 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:34.405 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.405 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:34.405 
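The RPC calls above assemble the bdev stack under test: each base device is a malloc bdev wrapped by a passthru bdev, and the spare additionally routes through a delay bdev. A dry-run recap of that sequence; `RPC` here is a stub that only prints the commands, whereas the real test dispatches them through `scripts/rpc.py` via `rpc_cmd`:

```shell
# Dry-run sketch of the bdev stack built above. RPC is an assumed stub that
# echoes the commands instead of executing them against a live SPDK target.
RPC="echo rpc.py"
for i in 1 2; do
    $RPC bdev_malloc_create 32 4096 -b "BaseBdev${i}_malloc"       # 32 MiB, 4k blocks
    $RPC bdev_passthru_create -b "BaseBdev${i}_malloc" -p "BaseBdev${i}"
done
$RPC bdev_malloc_create 32 4096 -b spare_malloc
$RPC bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
$RPC bdev_passthru_create -b spare_delay -p spare
```

The delay bdev (100 ms average write latency) is what makes the later rebuild slow enough for the test to observe it in progress.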
[2024-11-04 14:46:33.451861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:34.405 [2024-11-04 14:46:33.452087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:34.405 [2024-11-04 14:46:33.452128] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:34.405 [2024-11-04 14:46:33.452151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:34.405 [2024-11-04 14:46:33.455054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:34.405 [2024-11-04 14:46:33.455119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:34.405 spare 00:20:34.405 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.406 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:34.406 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.406 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:34.406 [2024-11-04 14:46:33.460007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:34.406 [2024-11-04 14:46:33.462664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:34.406 [2024-11-04 14:46:33.463074] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:34.406 [2024-11-04 14:46:33.463215] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:34.406 [2024-11-04 14:46:33.463570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:34.406 [2024-11-04 14:46:33.463901] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:34.406 [2024-11-04 
14:46:33.464051] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:34.406 [2024-11-04 14:46:33.464436] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:34.406 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.406 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:34.406 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:34.406 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:34.406 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:34.406 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:34.406 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:34.406 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:34.406 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:34.406 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:34.406 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:34.406 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.406 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.406 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:34.406 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.406 14:46:33 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.406 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:34.406 "name": "raid_bdev1", 00:20:34.406 "uuid": "e870a0b2-51fa-49f4-bd1a-8dcce7c220d4", 00:20:34.406 "strip_size_kb": 0, 00:20:34.406 "state": "online", 00:20:34.406 "raid_level": "raid1", 00:20:34.406 "superblock": true, 00:20:34.406 "num_base_bdevs": 2, 00:20:34.406 "num_base_bdevs_discovered": 2, 00:20:34.406 "num_base_bdevs_operational": 2, 00:20:34.406 "base_bdevs_list": [ 00:20:34.406 { 00:20:34.406 "name": "BaseBdev1", 00:20:34.406 "uuid": "e52d0a4d-94ea-5a1a-a318-733fac571963", 00:20:34.406 "is_configured": true, 00:20:34.406 "data_offset": 256, 00:20:34.406 "data_size": 7936 00:20:34.406 }, 00:20:34.406 { 00:20:34.406 "name": "BaseBdev2", 00:20:34.406 "uuid": "5201861e-c720-56f2-8963-6b3e16b6b715", 00:20:34.406 "is_configured": true, 00:20:34.406 "data_offset": 256, 00:20:34.406 "data_size": 7936 00:20:34.406 } 00:20:34.406 ] 00:20:34.406 }' 00:20:34.406 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:34.406 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:34.973 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:34.973 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.973 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:34.973 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:34.973 [2024-11-04 14:46:33.972912] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:34.973 14:46:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.973 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:20:34.973 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.973 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:34.973 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.973 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:34.973 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.973 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:20:34.973 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:34.973 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:34.973 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:34.973 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:34.973 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:34.973 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:34.973 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:34.973 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:34.973 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:34.973 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:20:34.973 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:34.973 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:34.973 
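The `raid_bdev_size=7936` and `data_offset=256` values above are pulled out of the RPC JSON with jq. A self-contained version of those queries; the sample JSON is trimmed and merges fields that the real test fetches from two separate RPCs (`bdev_get_bdevs` for `num_blocks`, `bdev_raid_get_bdevs` for `data_offset`):

```shell
# Trimmed sample of the RPC JSON and the jq queries the test uses to
# extract raid_bdev_size and data_offset from it.
raid_json='[{"name":"raid_bdev1","num_blocks":7936,
  "base_bdevs_list":[{"name":"BaseBdev1","data_offset":256,"data_size":7936}]}]'
raid_bdev_size=$(jq -r '.[].num_blocks' <<<"$raid_json")
data_offset=$(jq -r '.[].base_bdevs_list[0].data_offset' <<<"$raid_json")
echo "size=$raid_bdev_size offset=$data_offset"
```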
14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:35.231 [2024-11-04 14:46:34.352700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:35.490 /dev/nbd0 00:20:35.490 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:35.490 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:35.490 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:35.490 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:20:35.490 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:35.490 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:35.490 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:35.490 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:20:35.490 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:35.490 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:35.490 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:35.490 1+0 records in 00:20:35.490 1+0 records out 00:20:35.490 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319214 s, 12.8 MB/s 00:20:35.490 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:35.490 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:20:35.490 14:46:34 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:35.490 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:35.490 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:20:35.490 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:35.490 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:35.490 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:20:35.490 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:20:35.490 14:46:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:20:36.426 7936+0 records in 00:20:36.426 7936+0 records out 00:20:36.426 32505856 bytes (33 MB, 31 MiB) copied, 0.972192 s, 33.4 MB/s 00:20:36.426 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:36.426 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:36.426 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:36.426 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:36.426 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:20:36.426 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:36.426 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:36.685 [2024-11-04 14:46:35.630430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
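The fill write above (`dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct`) covers the whole raid bdev, which is why dd reports 32505856 bytes. Quick arithmetic check of that figure:

```shell
# The dd byte count above is raid_bdev_size blocks of blocklen bytes each.
blocklen=4096          # from "blockcnt 7936, blocklen 4096" in the raid logs
raid_bdev_size=7936    # blocks reported by bdev_get_bdevs .num_blocks
total=$((blocklen * raid_bdev_size))
echo "$total bytes"    # 32505856 bytes (33 MB, 31 MiB), matching dd's output
```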
00:20:36.685 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:36.685 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:36.685 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:36.685 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:36.685 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:36.685 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:36.685 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:20:36.685 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:20:36.685 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:36.685 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.685 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:36.685 [2024-11-04 14:46:35.666553] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:36.685 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.685 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:36.685 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:36.685 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:36.685 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:36.685 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:36.685 14:46:35 
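The `waitfornbd_exit nbd0` loop above polls `/proc/partitions` until the nbd device disappears after `nbd_stop_disk`. A sketch of that pattern, with an illustrative retry budget:

```shell
# Sketch of the waitfornbd_exit pattern: poll /proc/partitions until the
# named nbd device is gone, up to 20 attempts. Device name and sleep
# interval are illustrative assumptions.
waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        # grep -w avoids matching nbd10 when waiting for nbd1
        grep -q -w "$nbd_name" /proc/partitions || return 0
        sleep 0.1
    done
    return 1
}
```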
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:36.685 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:36.685 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:36.685 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:36.685 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:36.685 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.685 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.685 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.685 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:36.685 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.685 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:36.685 "name": "raid_bdev1", 00:20:36.685 "uuid": "e870a0b2-51fa-49f4-bd1a-8dcce7c220d4", 00:20:36.685 "strip_size_kb": 0, 00:20:36.685 "state": "online", 00:20:36.685 "raid_level": "raid1", 00:20:36.685 "superblock": true, 00:20:36.685 "num_base_bdevs": 2, 00:20:36.685 "num_base_bdevs_discovered": 1, 00:20:36.685 "num_base_bdevs_operational": 1, 00:20:36.685 "base_bdevs_list": [ 00:20:36.685 { 00:20:36.685 "name": null, 00:20:36.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.685 "is_configured": false, 00:20:36.685 "data_offset": 0, 00:20:36.685 "data_size": 7936 00:20:36.685 }, 00:20:36.685 { 00:20:36.685 "name": "BaseBdev2", 00:20:36.685 "uuid": "5201861e-c720-56f2-8963-6b3e16b6b715", 00:20:36.685 "is_configured": true, 00:20:36.685 "data_offset": 256, 00:20:36.685 
"data_size": 7936 00:20:36.685 } 00:20:36.685 ] 00:20:36.685 }' 00:20:36.685 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:36.685 14:46:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:37.258 14:46:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:37.258 14:46:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.258 14:46:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:37.258 [2024-11-04 14:46:36.178736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:37.258 [2024-11-04 14:46:36.195182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:20:37.258 14:46:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.258 14:46:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:37.258 [2024-11-04 14:46:36.197725] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:38.194 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:38.194 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:38.194 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:38.194 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:38.194 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:38.194 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.194 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:20:38.194 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.194 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:38.194 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.194 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:38.194 "name": "raid_bdev1", 00:20:38.194 "uuid": "e870a0b2-51fa-49f4-bd1a-8dcce7c220d4", 00:20:38.194 "strip_size_kb": 0, 00:20:38.194 "state": "online", 00:20:38.194 "raid_level": "raid1", 00:20:38.194 "superblock": true, 00:20:38.194 "num_base_bdevs": 2, 00:20:38.194 "num_base_bdevs_discovered": 2, 00:20:38.194 "num_base_bdevs_operational": 2, 00:20:38.194 "process": { 00:20:38.194 "type": "rebuild", 00:20:38.194 "target": "spare", 00:20:38.194 "progress": { 00:20:38.194 "blocks": 2560, 00:20:38.194 "percent": 32 00:20:38.194 } 00:20:38.194 }, 00:20:38.194 "base_bdevs_list": [ 00:20:38.194 { 00:20:38.194 "name": "spare", 00:20:38.194 "uuid": "ad73a7a9-8667-5b23-bf6f-873b47aae216", 00:20:38.194 "is_configured": true, 00:20:38.194 "data_offset": 256, 00:20:38.195 "data_size": 7936 00:20:38.195 }, 00:20:38.195 { 00:20:38.195 "name": "BaseBdev2", 00:20:38.195 "uuid": "5201861e-c720-56f2-8963-6b3e16b6b715", 00:20:38.195 "is_configured": true, 00:20:38.195 "data_offset": 256, 00:20:38.195 "data_size": 7936 00:20:38.195 } 00:20:38.195 ] 00:20:38.195 }' 00:20:38.195 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:38.195 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:38.195 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:38.453 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
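The `.process.type // "none"` and `.process.target // "none"` queries above lean on jq's alternative operator: while a rebuild is running the RAID info JSON carries a `process` object, and once it finishes the object disappears, so the query falls back to `"none"`. A trimmed, self-contained demonstration:

```shell
# jq's // operator turns a missing "process" object into the string "none",
# which is what verify_raid_bdev_process compares against. Samples trimmed.
rebuilding='{"name":"raid_bdev1","process":{"type":"rebuild","target":"spare"}}'
idle='{"name":"raid_bdev1"}'
t1=$(jq -r '.process.type // "none"' <<<"$rebuilding")
t2=$(jq -r '.process.type // "none"' <<<"$idle")
echo "$t1 $t2"
```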
00:20:38.453 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:38.453 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.453 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:38.453 [2024-11-04 14:46:37.371084] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:38.453 [2024-11-04 14:46:37.407102] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:38.453 [2024-11-04 14:46:37.407408] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:38.453 [2024-11-04 14:46:37.407437] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:38.453 [2024-11-04 14:46:37.407458] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:38.453 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.453 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:38.453 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:38.453 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:38.453 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:38.453 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:38.453 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:38.453 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:38.453 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:20:38.453 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:38.453 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:38.453 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.453 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.453 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.453 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:38.453 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.453 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:38.453 "name": "raid_bdev1", 00:20:38.453 "uuid": "e870a0b2-51fa-49f4-bd1a-8dcce7c220d4", 00:20:38.453 "strip_size_kb": 0, 00:20:38.453 "state": "online", 00:20:38.453 "raid_level": "raid1", 00:20:38.453 "superblock": true, 00:20:38.453 "num_base_bdevs": 2, 00:20:38.453 "num_base_bdevs_discovered": 1, 00:20:38.453 "num_base_bdevs_operational": 1, 00:20:38.453 "base_bdevs_list": [ 00:20:38.453 { 00:20:38.453 "name": null, 00:20:38.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.453 "is_configured": false, 00:20:38.453 "data_offset": 0, 00:20:38.453 "data_size": 7936 00:20:38.453 }, 00:20:38.453 { 00:20:38.453 "name": "BaseBdev2", 00:20:38.453 "uuid": "5201861e-c720-56f2-8963-6b3e16b6b715", 00:20:38.453 "is_configured": true, 00:20:38.453 "data_offset": 256, 00:20:38.453 "data_size": 7936 00:20:38.453 } 00:20:38.453 ] 00:20:38.453 }' 00:20:38.453 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:38.453 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:39.020 14:46:37 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:39.020 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:39.020 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:39.020 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:39.020 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:39.021 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.021 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.021 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.021 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:39.021 14:46:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.021 14:46:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:39.021 "name": "raid_bdev1", 00:20:39.021 "uuid": "e870a0b2-51fa-49f4-bd1a-8dcce7c220d4", 00:20:39.021 "strip_size_kb": 0, 00:20:39.021 "state": "online", 00:20:39.021 "raid_level": "raid1", 00:20:39.021 "superblock": true, 00:20:39.021 "num_base_bdevs": 2, 00:20:39.021 "num_base_bdevs_discovered": 1, 00:20:39.021 "num_base_bdevs_operational": 1, 00:20:39.021 "base_bdevs_list": [ 00:20:39.021 { 00:20:39.021 "name": null, 00:20:39.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.021 "is_configured": false, 00:20:39.021 "data_offset": 0, 00:20:39.021 "data_size": 7936 00:20:39.021 }, 00:20:39.021 { 00:20:39.021 "name": "BaseBdev2", 00:20:39.021 "uuid": "5201861e-c720-56f2-8963-6b3e16b6b715", 00:20:39.021 "is_configured": true, 00:20:39.021 "data_offset": 
256, 00:20:39.021 "data_size": 7936 00:20:39.021 } 00:20:39.021 ] 00:20:39.021 }' 00:20:39.021 14:46:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:39.021 14:46:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:39.021 14:46:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:39.021 14:46:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:39.021 14:46:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:39.021 14:46:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.021 14:46:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:39.021 [2024-11-04 14:46:38.120962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:39.021 [2024-11-04 14:46:38.136858] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:20:39.021 14:46:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.021 14:46:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:39.021 [2024-11-04 14:46:38.139518] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:40.396 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:40.396 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:40.396 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:40.396 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:40.396 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:40.396 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.396 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.396 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.396 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:40.396 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.396 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:40.396 "name": "raid_bdev1", 00:20:40.396 "uuid": "e870a0b2-51fa-49f4-bd1a-8dcce7c220d4", 00:20:40.396 "strip_size_kb": 0, 00:20:40.396 "state": "online", 00:20:40.396 "raid_level": "raid1", 00:20:40.396 "superblock": true, 00:20:40.396 "num_base_bdevs": 2, 00:20:40.396 "num_base_bdevs_discovered": 2, 00:20:40.396 "num_base_bdevs_operational": 2, 00:20:40.396 "process": { 00:20:40.396 "type": "rebuild", 00:20:40.396 "target": "spare", 00:20:40.396 "progress": { 00:20:40.396 "blocks": 2560, 00:20:40.396 "percent": 32 00:20:40.396 } 00:20:40.396 }, 00:20:40.396 "base_bdevs_list": [ 00:20:40.396 { 00:20:40.396 "name": "spare", 00:20:40.396 "uuid": "ad73a7a9-8667-5b23-bf6f-873b47aae216", 00:20:40.396 "is_configured": true, 00:20:40.396 "data_offset": 256, 00:20:40.396 "data_size": 7936 00:20:40.396 }, 00:20:40.396 { 00:20:40.396 "name": "BaseBdev2", 00:20:40.396 "uuid": "5201861e-c720-56f2-8963-6b3e16b6b715", 00:20:40.396 "is_configured": true, 00:20:40.397 "data_offset": 256, 00:20:40.397 "data_size": 7936 00:20:40.397 } 00:20:40.397 ] 00:20:40.397 }' 00:20:40.397 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:40.397 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:20:40.397 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:40.397 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:40.397 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:40.397 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:40.397 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:40.397 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:40.397 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:40.397 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:40.397 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=732 00:20:40.397 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:40.397 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:40.397 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:40.397 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:40.397 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:40.397 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:40.397 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.397 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.397 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:20:40.397 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.397 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.397 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:40.397 "name": "raid_bdev1", 00:20:40.397 "uuid": "e870a0b2-51fa-49f4-bd1a-8dcce7c220d4", 00:20:40.397 "strip_size_kb": 0, 00:20:40.397 "state": "online", 00:20:40.397 "raid_level": "raid1", 00:20:40.397 "superblock": true, 00:20:40.397 "num_base_bdevs": 2, 00:20:40.397 "num_base_bdevs_discovered": 2, 00:20:40.397 "num_base_bdevs_operational": 2, 00:20:40.397 "process": { 00:20:40.397 "type": "rebuild", 00:20:40.397 "target": "spare", 00:20:40.397 "progress": { 00:20:40.397 "blocks": 2816, 00:20:40.397 "percent": 35 00:20:40.397 } 00:20:40.397 }, 00:20:40.397 "base_bdevs_list": [ 00:20:40.397 { 00:20:40.397 "name": "spare", 00:20:40.397 "uuid": "ad73a7a9-8667-5b23-bf6f-873b47aae216", 00:20:40.397 "is_configured": true, 00:20:40.397 "data_offset": 256, 00:20:40.397 "data_size": 7936 00:20:40.397 }, 00:20:40.397 { 00:20:40.397 "name": "BaseBdev2", 00:20:40.397 "uuid": "5201861e-c720-56f2-8963-6b3e16b6b715", 00:20:40.397 "is_configured": true, 00:20:40.397 "data_offset": 256, 00:20:40.397 "data_size": 7936 00:20:40.397 } 00:20:40.397 ] 00:20:40.397 }' 00:20:40.397 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:40.397 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:40.397 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:40.397 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:40.397 14:46:39 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:20:41.380 14:46:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:41.380 14:46:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:41.380 14:46:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:41.380 14:46:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:41.380 14:46:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:41.380 14:46:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:41.380 14:46:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.380 14:46:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.380 14:46:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.380 14:46:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:41.380 14:46:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.639 14:46:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:41.639 "name": "raid_bdev1", 00:20:41.639 "uuid": "e870a0b2-51fa-49f4-bd1a-8dcce7c220d4", 00:20:41.639 "strip_size_kb": 0, 00:20:41.639 "state": "online", 00:20:41.639 "raid_level": "raid1", 00:20:41.639 "superblock": true, 00:20:41.639 "num_base_bdevs": 2, 00:20:41.639 "num_base_bdevs_discovered": 2, 00:20:41.639 "num_base_bdevs_operational": 2, 00:20:41.639 "process": { 00:20:41.639 "type": "rebuild", 00:20:41.639 "target": "spare", 00:20:41.639 "progress": { 00:20:41.639 "blocks": 5888, 00:20:41.639 "percent": 74 00:20:41.639 } 00:20:41.639 }, 00:20:41.639 "base_bdevs_list": [ 00:20:41.639 { 
00:20:41.639 "name": "spare", 00:20:41.639 "uuid": "ad73a7a9-8667-5b23-bf6f-873b47aae216", 00:20:41.639 "is_configured": true, 00:20:41.639 "data_offset": 256, 00:20:41.639 "data_size": 7936 00:20:41.639 }, 00:20:41.639 { 00:20:41.639 "name": "BaseBdev2", 00:20:41.639 "uuid": "5201861e-c720-56f2-8963-6b3e16b6b715", 00:20:41.639 "is_configured": true, 00:20:41.639 "data_offset": 256, 00:20:41.639 "data_size": 7936 00:20:41.639 } 00:20:41.639 ] 00:20:41.639 }' 00:20:41.639 14:46:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:41.639 14:46:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:41.639 14:46:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:41.639 14:46:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:41.639 14:46:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:42.302 [2024-11-04 14:46:41.261611] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:42.302 [2024-11-04 14:46:41.261707] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:42.302 [2024-11-04 14:46:41.261859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:42.561 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:42.561 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:42.561 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:42.561 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:42.561 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:20:42.561 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:42.561 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.561 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.561 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.561 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:42.561 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.561 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:42.561 "name": "raid_bdev1", 00:20:42.561 "uuid": "e870a0b2-51fa-49f4-bd1a-8dcce7c220d4", 00:20:42.561 "strip_size_kb": 0, 00:20:42.561 "state": "online", 00:20:42.561 "raid_level": "raid1", 00:20:42.561 "superblock": true, 00:20:42.561 "num_base_bdevs": 2, 00:20:42.561 "num_base_bdevs_discovered": 2, 00:20:42.561 "num_base_bdevs_operational": 2, 00:20:42.561 "base_bdevs_list": [ 00:20:42.561 { 00:20:42.561 "name": "spare", 00:20:42.561 "uuid": "ad73a7a9-8667-5b23-bf6f-873b47aae216", 00:20:42.561 "is_configured": true, 00:20:42.561 "data_offset": 256, 00:20:42.561 "data_size": 7936 00:20:42.561 }, 00:20:42.561 { 00:20:42.561 "name": "BaseBdev2", 00:20:42.561 "uuid": "5201861e-c720-56f2-8963-6b3e16b6b715", 00:20:42.561 "is_configured": true, 00:20:42.561 "data_offset": 256, 00:20:42.561 "data_size": 7936 00:20:42.561 } 00:20:42.561 ] 00:20:42.561 }' 00:20:42.561 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:42.820 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:42.820 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:20:42.820 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:42.820 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:20:42.820 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:42.820 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:42.820 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:42.820 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:42.820 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:42.820 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.820 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.820 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:42.820 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.820 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.820 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:42.820 "name": "raid_bdev1", 00:20:42.820 "uuid": "e870a0b2-51fa-49f4-bd1a-8dcce7c220d4", 00:20:42.820 "strip_size_kb": 0, 00:20:42.820 "state": "online", 00:20:42.820 "raid_level": "raid1", 00:20:42.820 "superblock": true, 00:20:42.820 "num_base_bdevs": 2, 00:20:42.820 "num_base_bdevs_discovered": 2, 00:20:42.820 "num_base_bdevs_operational": 2, 00:20:42.820 "base_bdevs_list": [ 00:20:42.820 { 00:20:42.820 "name": "spare", 00:20:42.820 "uuid": "ad73a7a9-8667-5b23-bf6f-873b47aae216", 00:20:42.820 "is_configured": true, 00:20:42.820 
"data_offset": 256, 00:20:42.820 "data_size": 7936 00:20:42.820 }, 00:20:42.820 { 00:20:42.820 "name": "BaseBdev2", 00:20:42.820 "uuid": "5201861e-c720-56f2-8963-6b3e16b6b715", 00:20:42.820 "is_configured": true, 00:20:42.820 "data_offset": 256, 00:20:42.820 "data_size": 7936 00:20:42.820 } 00:20:42.820 ] 00:20:42.820 }' 00:20:42.820 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:42.820 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:42.820 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:43.079 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:43.079 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:43.079 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:43.079 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:43.079 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:43.079 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:43.079 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:43.079 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:43.079 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:43.079 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:43.079 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:43.079 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:20:43.079 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.079 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.079 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:43.079 14:46:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.079 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:43.079 "name": "raid_bdev1", 00:20:43.079 "uuid": "e870a0b2-51fa-49f4-bd1a-8dcce7c220d4", 00:20:43.079 "strip_size_kb": 0, 00:20:43.079 "state": "online", 00:20:43.080 "raid_level": "raid1", 00:20:43.080 "superblock": true, 00:20:43.080 "num_base_bdevs": 2, 00:20:43.080 "num_base_bdevs_discovered": 2, 00:20:43.080 "num_base_bdevs_operational": 2, 00:20:43.080 "base_bdevs_list": [ 00:20:43.080 { 00:20:43.080 "name": "spare", 00:20:43.080 "uuid": "ad73a7a9-8667-5b23-bf6f-873b47aae216", 00:20:43.080 "is_configured": true, 00:20:43.080 "data_offset": 256, 00:20:43.080 "data_size": 7936 00:20:43.080 }, 00:20:43.080 { 00:20:43.080 "name": "BaseBdev2", 00:20:43.080 "uuid": "5201861e-c720-56f2-8963-6b3e16b6b715", 00:20:43.080 "is_configured": true, 00:20:43.080 "data_offset": 256, 00:20:43.080 "data_size": 7936 00:20:43.080 } 00:20:43.080 ] 00:20:43.080 }' 00:20:43.080 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:43.080 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:43.647 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:43.647 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.647 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:43.647 
[2024-11-04 14:46:42.497696] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:43.647 [2024-11-04 14:46:42.497890] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:43.647 [2024-11-04 14:46:42.498147] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:43.647 [2024-11-04 14:46:42.498362] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:43.647 [2024-11-04 14:46:42.498534] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:43.647 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.647 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.647 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:20:43.647 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.647 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:43.647 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.647 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:43.647 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:43.647 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:43.647 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:43.647 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:43.647 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:20:43.647 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:43.647 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:43.647 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:43.647 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:20:43.647 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:43.647 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:43.647 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:43.906 /dev/nbd0 00:20:43.907 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:43.907 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:43.907 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:43.907 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:20:43.907 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:43.907 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:43.907 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:43.907 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:20:43.907 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:43.907 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:43.907 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:43.907 1+0 records in 00:20:43.907 1+0 records out 00:20:43.907 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275304 s, 14.9 MB/s 00:20:43.907 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:43.907 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:20:43.907 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:43.907 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:43.907 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:20:43.907 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:43.907 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:43.907 14:46:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:44.165 /dev/nbd1 00:20:44.165 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:44.425 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:44.425 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:20:44.425 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:20:44.425 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:44.425 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:44.425 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:20:44.425 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:20:44.425 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:44.425 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:44.425 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:44.425 1+0 records in 00:20:44.425 1+0 records out 00:20:44.425 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431283 s, 9.5 MB/s 00:20:44.425 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:44.425 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:20:44.425 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:44.425 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:44.425 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:20:44.425 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:44.425 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:44.425 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:44.425 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:44.425 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:44.425 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:44.425 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:44.425 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:20:44.425 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:44.425 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:44.992 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:44.992 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:44.992 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:44.992 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:44.992 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:44.992 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:44.992 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:20:44.992 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:20:44.992 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:44.992 14:46:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:44.992 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:44.992 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:44.992 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:44.992 14:46:44 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:44.992 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:44.992 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:44.992 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:20:44.992 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:20:44.992 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:44.992 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:44.992 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.992 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:44.992 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.992 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:44.992 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.992 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:44.992 [2024-11-04 14:46:44.111858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:44.992 [2024-11-04 14:46:44.112087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:44.992 [2024-11-04 14:46:44.112131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:44.992 [2024-11-04 14:46:44.112148] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.250 [2024-11-04 14:46:44.115109] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.250 
[2024-11-04 14:46:44.115163] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:45.250 [2024-11-04 14:46:44.115279] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:45.250 [2024-11-04 14:46:44.115345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:45.250 [2024-11-04 14:46:44.115542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:45.250 spare 00:20:45.250 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.250 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:45.250 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.250 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:45.250 [2024-11-04 14:46:44.215694] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:45.250 [2024-11-04 14:46:44.215955] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:45.251 [2024-11-04 14:46:44.216352] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:20:45.251 [2024-11-04 14:46:44.216591] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:45.251 [2024-11-04 14:46:44.216611] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:45.251 [2024-11-04 14:46:44.216847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:45.251 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.251 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:45.251 14:46:44 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:45.251 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:45.251 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:45.251 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:45.251 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:45.251 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:45.251 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:45.251 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:45.251 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:45.251 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.251 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.251 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.251 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:45.251 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.251 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:45.251 "name": "raid_bdev1", 00:20:45.251 "uuid": "e870a0b2-51fa-49f4-bd1a-8dcce7c220d4", 00:20:45.251 "strip_size_kb": 0, 00:20:45.251 "state": "online", 00:20:45.251 "raid_level": "raid1", 00:20:45.251 "superblock": true, 00:20:45.251 "num_base_bdevs": 2, 00:20:45.251 "num_base_bdevs_discovered": 2, 00:20:45.251 "num_base_bdevs_operational": 2, 
00:20:45.251 "base_bdevs_list": [ 00:20:45.251 { 00:20:45.251 "name": "spare", 00:20:45.251 "uuid": "ad73a7a9-8667-5b23-bf6f-873b47aae216", 00:20:45.251 "is_configured": true, 00:20:45.251 "data_offset": 256, 00:20:45.251 "data_size": 7936 00:20:45.251 }, 00:20:45.251 { 00:20:45.251 "name": "BaseBdev2", 00:20:45.251 "uuid": "5201861e-c720-56f2-8963-6b3e16b6b715", 00:20:45.251 "is_configured": true, 00:20:45.251 "data_offset": 256, 00:20:45.251 "data_size": 7936 00:20:45.251 } 00:20:45.251 ] 00:20:45.251 }' 00:20:45.251 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:45.251 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:45.817 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:45.817 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:45.817 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:45.817 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:45.817 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:45.817 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.817 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.817 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.817 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:45.817 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.817 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:45.817 "name": "raid_bdev1", 00:20:45.817 
"uuid": "e870a0b2-51fa-49f4-bd1a-8dcce7c220d4", 00:20:45.817 "strip_size_kb": 0, 00:20:45.817 "state": "online", 00:20:45.817 "raid_level": "raid1", 00:20:45.817 "superblock": true, 00:20:45.817 "num_base_bdevs": 2, 00:20:45.817 "num_base_bdevs_discovered": 2, 00:20:45.817 "num_base_bdevs_operational": 2, 00:20:45.817 "base_bdevs_list": [ 00:20:45.817 { 00:20:45.817 "name": "spare", 00:20:45.817 "uuid": "ad73a7a9-8667-5b23-bf6f-873b47aae216", 00:20:45.817 "is_configured": true, 00:20:45.817 "data_offset": 256, 00:20:45.817 "data_size": 7936 00:20:45.817 }, 00:20:45.817 { 00:20:45.817 "name": "BaseBdev2", 00:20:45.817 "uuid": "5201861e-c720-56f2-8963-6b3e16b6b715", 00:20:45.817 "is_configured": true, 00:20:45.817 "data_offset": 256, 00:20:45.817 "data_size": 7936 00:20:45.817 } 00:20:45.817 ] 00:20:45.817 }' 00:20:45.817 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:45.817 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:45.817 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:45.817 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:45.817 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.817 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.817 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:45.817 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:45.817 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.817 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:45.817 14:46:44 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:45.817 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.817 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:46.075 [2024-11-04 14:46:44.941020] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:46.075 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.075 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:46.075 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:46.075 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:46.075 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:46.076 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:46.076 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:46.076 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:46.076 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:46.076 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:46.076 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:46.076 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.076 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.076 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.076 
14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:46.076 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.076 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:46.076 "name": "raid_bdev1", 00:20:46.076 "uuid": "e870a0b2-51fa-49f4-bd1a-8dcce7c220d4", 00:20:46.076 "strip_size_kb": 0, 00:20:46.076 "state": "online", 00:20:46.076 "raid_level": "raid1", 00:20:46.076 "superblock": true, 00:20:46.076 "num_base_bdevs": 2, 00:20:46.076 "num_base_bdevs_discovered": 1, 00:20:46.076 "num_base_bdevs_operational": 1, 00:20:46.076 "base_bdevs_list": [ 00:20:46.076 { 00:20:46.076 "name": null, 00:20:46.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.076 "is_configured": false, 00:20:46.076 "data_offset": 0, 00:20:46.076 "data_size": 7936 00:20:46.076 }, 00:20:46.076 { 00:20:46.076 "name": "BaseBdev2", 00:20:46.076 "uuid": "5201861e-c720-56f2-8963-6b3e16b6b715", 00:20:46.076 "is_configured": true, 00:20:46.076 "data_offset": 256, 00:20:46.076 "data_size": 7936 00:20:46.076 } 00:20:46.076 ] 00:20:46.076 }' 00:20:46.076 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:46.076 14:46:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:46.363 14:46:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:46.363 14:46:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.363 14:46:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:46.363 [2024-11-04 14:46:45.421181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:46.363 [2024-11-04 14:46:45.421422] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:20:46.363 [2024-11-04 14:46:45.421452] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:46.363 [2024-11-04 14:46:45.421502] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:46.363 [2024-11-04 14:46:45.436769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:20:46.363 14:46:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.363 14:46:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:46.363 [2024-11-04 14:46:45.439502] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:47.740 
"name": "raid_bdev1", 00:20:47.740 "uuid": "e870a0b2-51fa-49f4-bd1a-8dcce7c220d4", 00:20:47.740 "strip_size_kb": 0, 00:20:47.740 "state": "online", 00:20:47.740 "raid_level": "raid1", 00:20:47.740 "superblock": true, 00:20:47.740 "num_base_bdevs": 2, 00:20:47.740 "num_base_bdevs_discovered": 2, 00:20:47.740 "num_base_bdevs_operational": 2, 00:20:47.740 "process": { 00:20:47.740 "type": "rebuild", 00:20:47.740 "target": "spare", 00:20:47.740 "progress": { 00:20:47.740 "blocks": 2560, 00:20:47.740 "percent": 32 00:20:47.740 } 00:20:47.740 }, 00:20:47.740 "base_bdevs_list": [ 00:20:47.740 { 00:20:47.740 "name": "spare", 00:20:47.740 "uuid": "ad73a7a9-8667-5b23-bf6f-873b47aae216", 00:20:47.740 "is_configured": true, 00:20:47.740 "data_offset": 256, 00:20:47.740 "data_size": 7936 00:20:47.740 }, 00:20:47.740 { 00:20:47.740 "name": "BaseBdev2", 00:20:47.740 "uuid": "5201861e-c720-56f2-8963-6b3e16b6b715", 00:20:47.740 "is_configured": true, 00:20:47.740 "data_offset": 256, 00:20:47.740 "data_size": 7936 00:20:47.740 } 00:20:47.740 ] 00:20:47.740 }' 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:47.740 [2024-11-04 14:46:46.605085] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:47.740 [2024-11-04 
14:46:46.648265] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:47.740 [2024-11-04 14:46:46.648387] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:47.740 [2024-11-04 14:46:46.648413] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:47.740 [2024-11-04 14:46:46.648431] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.740 14:46:46 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:47.740 "name": "raid_bdev1", 00:20:47.740 "uuid": "e870a0b2-51fa-49f4-bd1a-8dcce7c220d4", 00:20:47.740 "strip_size_kb": 0, 00:20:47.740 "state": "online", 00:20:47.740 "raid_level": "raid1", 00:20:47.740 "superblock": true, 00:20:47.740 "num_base_bdevs": 2, 00:20:47.740 "num_base_bdevs_discovered": 1, 00:20:47.740 "num_base_bdevs_operational": 1, 00:20:47.740 "base_bdevs_list": [ 00:20:47.740 { 00:20:47.740 "name": null, 00:20:47.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.740 "is_configured": false, 00:20:47.740 "data_offset": 0, 00:20:47.740 "data_size": 7936 00:20:47.740 }, 00:20:47.740 { 00:20:47.740 "name": "BaseBdev2", 00:20:47.740 "uuid": "5201861e-c720-56f2-8963-6b3e16b6b715", 00:20:47.740 "is_configured": true, 00:20:47.740 "data_offset": 256, 00:20:47.740 "data_size": 7936 00:20:47.740 } 00:20:47.740 ] 00:20:47.740 }' 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:47.740 14:46:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:48.336 14:46:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:48.336 14:46:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.336 14:46:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:48.336 [2024-11-04 14:46:47.144274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:48.336 [2024-11-04 14:46:47.144359] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:48.336 [2024-11-04 14:46:47.144391] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:48.336 [2024-11-04 14:46:47.144410] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:48.336 [2024-11-04 14:46:47.145038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:48.336 [2024-11-04 14:46:47.145075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:48.336 [2024-11-04 14:46:47.145194] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:48.336 [2024-11-04 14:46:47.145219] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:48.336 [2024-11-04 14:46:47.145232] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:48.336 [2024-11-04 14:46:47.145269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:48.336 [2024-11-04 14:46:47.160664] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:20:48.336 [2024-11-04 14:46:47.163182] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:48.336 spare 00:20:48.336 14:46:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.337 14:46:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:49.271 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:49.271 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:49.271 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:49.271 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:49.271 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:49.271 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.271 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.271 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:49.271 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.271 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.271 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:49.271 "name": "raid_bdev1", 00:20:49.271 "uuid": "e870a0b2-51fa-49f4-bd1a-8dcce7c220d4", 00:20:49.271 "strip_size_kb": 0, 00:20:49.271 
"state": "online", 00:20:49.271 "raid_level": "raid1", 00:20:49.271 "superblock": true, 00:20:49.271 "num_base_bdevs": 2, 00:20:49.271 "num_base_bdevs_discovered": 2, 00:20:49.271 "num_base_bdevs_operational": 2, 00:20:49.271 "process": { 00:20:49.271 "type": "rebuild", 00:20:49.271 "target": "spare", 00:20:49.271 "progress": { 00:20:49.271 "blocks": 2560, 00:20:49.271 "percent": 32 00:20:49.271 } 00:20:49.271 }, 00:20:49.271 "base_bdevs_list": [ 00:20:49.271 { 00:20:49.271 "name": "spare", 00:20:49.271 "uuid": "ad73a7a9-8667-5b23-bf6f-873b47aae216", 00:20:49.271 "is_configured": true, 00:20:49.271 "data_offset": 256, 00:20:49.271 "data_size": 7936 00:20:49.271 }, 00:20:49.271 { 00:20:49.271 "name": "BaseBdev2", 00:20:49.271 "uuid": "5201861e-c720-56f2-8963-6b3e16b6b715", 00:20:49.271 "is_configured": true, 00:20:49.271 "data_offset": 256, 00:20:49.271 "data_size": 7936 00:20:49.271 } 00:20:49.271 ] 00:20:49.271 }' 00:20:49.271 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:49.271 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:49.271 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:49.271 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:49.271 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:49.271 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.271 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:49.271 [2024-11-04 14:46:48.332989] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:49.271 [2024-11-04 14:46:48.372216] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:20:49.271 [2024-11-04 14:46:48.372461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:49.271 [2024-11-04 14:46:48.372499] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:49.271 [2024-11-04 14:46:48.372513] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:49.531 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.531 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:49.531 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:49.531 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:49.531 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:49.531 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:49.531 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:49.531 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:49.531 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:49.531 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:49.531 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:49.531 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.531 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.531 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.531 14:46:48 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:49.531 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.531 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:49.531 "name": "raid_bdev1", 00:20:49.531 "uuid": "e870a0b2-51fa-49f4-bd1a-8dcce7c220d4", 00:20:49.531 "strip_size_kb": 0, 00:20:49.531 "state": "online", 00:20:49.531 "raid_level": "raid1", 00:20:49.531 "superblock": true, 00:20:49.531 "num_base_bdevs": 2, 00:20:49.531 "num_base_bdevs_discovered": 1, 00:20:49.531 "num_base_bdevs_operational": 1, 00:20:49.531 "base_bdevs_list": [ 00:20:49.531 { 00:20:49.531 "name": null, 00:20:49.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.531 "is_configured": false, 00:20:49.531 "data_offset": 0, 00:20:49.531 "data_size": 7936 00:20:49.531 }, 00:20:49.531 { 00:20:49.531 "name": "BaseBdev2", 00:20:49.531 "uuid": "5201861e-c720-56f2-8963-6b3e16b6b715", 00:20:49.531 "is_configured": true, 00:20:49.531 "data_offset": 256, 00:20:49.531 "data_size": 7936 00:20:49.531 } 00:20:49.531 ] 00:20:49.531 }' 00:20:49.531 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:49.532 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:49.790 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:49.790 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:49.790 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:49.790 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:49.790 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:49.790 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.790 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.790 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.790 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:50.048 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.048 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:50.048 "name": "raid_bdev1", 00:20:50.049 "uuid": "e870a0b2-51fa-49f4-bd1a-8dcce7c220d4", 00:20:50.049 "strip_size_kb": 0, 00:20:50.049 "state": "online", 00:20:50.049 "raid_level": "raid1", 00:20:50.049 "superblock": true, 00:20:50.049 "num_base_bdevs": 2, 00:20:50.049 "num_base_bdevs_discovered": 1, 00:20:50.049 "num_base_bdevs_operational": 1, 00:20:50.049 "base_bdevs_list": [ 00:20:50.049 { 00:20:50.049 "name": null, 00:20:50.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.049 "is_configured": false, 00:20:50.049 "data_offset": 0, 00:20:50.049 "data_size": 7936 00:20:50.049 }, 00:20:50.049 { 00:20:50.049 "name": "BaseBdev2", 00:20:50.049 "uuid": "5201861e-c720-56f2-8963-6b3e16b6b715", 00:20:50.049 "is_configured": true, 00:20:50.049 "data_offset": 256, 00:20:50.049 "data_size": 7936 00:20:50.049 } 00:20:50.049 ] 00:20:50.049 }' 00:20:50.049 14:46:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:50.049 14:46:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:50.049 14:46:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:50.049 14:46:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:50.049 14:46:49 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:50.049 14:46:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.049 14:46:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:50.049 14:46:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.049 14:46:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:50.049 14:46:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.049 14:46:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:50.049 [2024-11-04 14:46:49.080300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:50.049 [2024-11-04 14:46:49.080367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:50.049 [2024-11-04 14:46:49.080399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:50.049 [2024-11-04 14:46:49.080425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:50.049 [2024-11-04 14:46:49.081000] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:50.049 [2024-11-04 14:46:49.081032] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:50.049 [2024-11-04 14:46:49.081138] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:50.049 [2024-11-04 14:46:49.081159] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:50.049 [2024-11-04 14:46:49.081174] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:50.049 [2024-11-04 14:46:49.081186] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:50.049 BaseBdev1 00:20:50.049 14:46:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.049 14:46:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:50.983 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:50.983 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:50.983 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:50.983 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:50.983 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:50.983 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:50.983 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:50.983 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:50.983 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:50.983 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:50.983 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.983 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.983 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:50.983 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.241 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.241 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:51.241 "name": "raid_bdev1", 00:20:51.241 "uuid": "e870a0b2-51fa-49f4-bd1a-8dcce7c220d4", 00:20:51.241 "strip_size_kb": 0, 00:20:51.241 "state": "online", 00:20:51.241 "raid_level": "raid1", 00:20:51.241 "superblock": true, 00:20:51.241 "num_base_bdevs": 2, 00:20:51.241 "num_base_bdevs_discovered": 1, 00:20:51.242 "num_base_bdevs_operational": 1, 00:20:51.242 "base_bdevs_list": [ 00:20:51.242 { 00:20:51.242 "name": null, 00:20:51.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.242 "is_configured": false, 00:20:51.242 "data_offset": 0, 00:20:51.242 "data_size": 7936 00:20:51.242 }, 00:20:51.242 { 00:20:51.242 "name": "BaseBdev2", 00:20:51.242 "uuid": "5201861e-c720-56f2-8963-6b3e16b6b715", 00:20:51.242 "is_configured": true, 00:20:51.242 "data_offset": 256, 00:20:51.242 "data_size": 7936 00:20:51.242 } 00:20:51.242 ] 00:20:51.242 }' 00:20:51.242 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:51.242 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:51.500 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:51.500 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:51.500 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:51.500 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:51.500 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:51.500 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.500 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.500 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.500 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:51.500 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.500 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:51.500 "name": "raid_bdev1", 00:20:51.500 "uuid": "e870a0b2-51fa-49f4-bd1a-8dcce7c220d4", 00:20:51.500 "strip_size_kb": 0, 00:20:51.500 "state": "online", 00:20:51.500 "raid_level": "raid1", 00:20:51.500 "superblock": true, 00:20:51.500 "num_base_bdevs": 2, 00:20:51.500 "num_base_bdevs_discovered": 1, 00:20:51.500 "num_base_bdevs_operational": 1, 00:20:51.500 "base_bdevs_list": [ 00:20:51.500 { 00:20:51.500 "name": null, 00:20:51.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.500 "is_configured": false, 00:20:51.500 "data_offset": 0, 00:20:51.500 "data_size": 7936 00:20:51.500 }, 00:20:51.500 { 00:20:51.501 "name": "BaseBdev2", 00:20:51.501 "uuid": "5201861e-c720-56f2-8963-6b3e16b6b715", 00:20:51.501 "is_configured": true, 00:20:51.501 "data_offset": 256, 00:20:51.501 "data_size": 7936 00:20:51.501 } 00:20:51.501 ] 00:20:51.501 }' 00:20:51.501 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:51.759 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:51.759 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:51.759 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:51.759 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:51.759 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:20:51.759 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:51.759 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:51.759 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:51.759 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:51.759 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:51.759 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:51.759 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.759 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:51.760 [2024-11-04 14:46:50.720819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:51.760 [2024-11-04 14:46:50.721045] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:51.760 [2024-11-04 14:46:50.721069] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:51.760 request: 00:20:51.760 { 00:20:51.760 "base_bdev": "BaseBdev1", 00:20:51.760 "raid_bdev": "raid_bdev1", 00:20:51.760 "method": "bdev_raid_add_base_bdev", 00:20:51.760 "req_id": 1 00:20:51.760 } 00:20:51.760 Got JSON-RPC error response 00:20:51.760 response: 00:20:51.760 { 00:20:51.760 "code": -22, 00:20:51.760 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:51.760 } 00:20:51.760 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:20:51.760 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:20:51.760 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:51.760 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:51.760 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:51.760 14:46:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:52.697 14:46:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:52.697 14:46:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:52.697 14:46:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:52.697 14:46:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:52.697 14:46:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:52.697 14:46:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:52.697 14:46:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:52.697 14:46:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:52.697 14:46:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:52.697 14:46:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:52.697 14:46:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.697 14:46:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.697 14:46:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:52.697 14:46:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:52.697 14:46:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.697 14:46:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:52.697 "name": "raid_bdev1", 00:20:52.697 "uuid": "e870a0b2-51fa-49f4-bd1a-8dcce7c220d4", 00:20:52.697 "strip_size_kb": 0, 00:20:52.697 "state": "online", 00:20:52.697 "raid_level": "raid1", 00:20:52.697 "superblock": true, 00:20:52.697 "num_base_bdevs": 2, 00:20:52.697 "num_base_bdevs_discovered": 1, 00:20:52.697 "num_base_bdevs_operational": 1, 00:20:52.697 "base_bdevs_list": [ 00:20:52.697 { 00:20:52.697 "name": null, 00:20:52.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.697 "is_configured": false, 00:20:52.697 "data_offset": 0, 00:20:52.697 "data_size": 7936 00:20:52.697 }, 00:20:52.697 { 00:20:52.697 "name": "BaseBdev2", 00:20:52.697 "uuid": "5201861e-c720-56f2-8963-6b3e16b6b715", 00:20:52.697 "is_configured": true, 00:20:52.697 "data_offset": 256, 00:20:52.697 "data_size": 7936 00:20:52.697 } 00:20:52.697 ] 00:20:52.697 }' 00:20:52.697 14:46:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:52.697 14:46:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:53.263 14:46:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:53.263 14:46:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:53.263 14:46:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:53.263 14:46:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:53.263 14:46:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:53.263 14:46:52 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.263 14:46:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.263 14:46:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.263 14:46:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:53.263 14:46:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.263 14:46:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:53.263 "name": "raid_bdev1", 00:20:53.263 "uuid": "e870a0b2-51fa-49f4-bd1a-8dcce7c220d4", 00:20:53.263 "strip_size_kb": 0, 00:20:53.263 "state": "online", 00:20:53.263 "raid_level": "raid1", 00:20:53.263 "superblock": true, 00:20:53.263 "num_base_bdevs": 2, 00:20:53.263 "num_base_bdevs_discovered": 1, 00:20:53.263 "num_base_bdevs_operational": 1, 00:20:53.263 "base_bdevs_list": [ 00:20:53.263 { 00:20:53.263 "name": null, 00:20:53.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.263 "is_configured": false, 00:20:53.263 "data_offset": 0, 00:20:53.263 "data_size": 7936 00:20:53.263 }, 00:20:53.263 { 00:20:53.263 "name": "BaseBdev2", 00:20:53.263 "uuid": "5201861e-c720-56f2-8963-6b3e16b6b715", 00:20:53.263 "is_configured": true, 00:20:53.263 "data_offset": 256, 00:20:53.263 "data_size": 7936 00:20:53.263 } 00:20:53.263 ] 00:20:53.263 }' 00:20:53.263 14:46:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:53.263 14:46:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:53.263 14:46:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:53.520 14:46:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:53.520 14:46:52 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86922 00:20:53.520 14:46:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 86922 ']' 00:20:53.520 14:46:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 86922 00:20:53.520 14:46:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:20:53.520 14:46:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:53.520 14:46:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86922 00:20:53.520 14:46:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:53.520 killing process with pid 86922 00:20:53.520 Received shutdown signal, test time was about 60.000000 seconds 00:20:53.521 00:20:53.521 Latency(us) 00:20:53.521 [2024-11-04T14:46:52.644Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:53.521 [2024-11-04T14:46:52.644Z] =================================================================================================================== 00:20:53.521 [2024-11-04T14:46:52.644Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:53.521 14:46:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:53.521 14:46:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86922' 00:20:53.521 14:46:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@971 -- # kill 86922 00:20:53.521 [2024-11-04 14:46:52.436346] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:53.521 14:46:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@976 -- # wait 86922 00:20:53.521 [2024-11-04 14:46:52.436494] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:53.521 [2024-11-04 
14:46:52.436558] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:53.521 [2024-11-04 14:46:52.436578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:53.778 [2024-11-04 14:46:52.706140] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:54.711 14:46:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:20:54.711 ************************************ 00:20:54.711 END TEST raid_rebuild_test_sb_4k 00:20:54.711 ************************************ 00:20:54.711 00:20:54.711 real 0m21.576s 00:20:54.711 user 0m29.192s 00:20:54.711 sys 0m2.406s 00:20:54.711 14:46:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:54.711 14:46:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:54.711 14:46:53 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:20:54.711 14:46:53 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:20:54.711 14:46:53 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:20:54.711 14:46:53 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:54.711 14:46:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:54.711 ************************************ 00:20:54.711 START TEST raid_state_function_test_sb_md_separate 00:20:54.711 ************************************ 00:20:54.711 14:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:20:54.711 14:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:20:54.712 14:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:54.712 
14:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:54.712 14:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:54.712 14:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:54.712 14:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:54.712 14:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:54.712 14:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:54.712 14:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:54.712 14:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:54.712 14:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:54.712 14:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:54.712 14:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:54.712 14:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:54.712 14:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:54.712 14:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:54.712 14:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:54.712 14:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:54.712 14:46:53 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:20:54.712 14:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:20:54.712 14:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:54.712 14:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:54.712 14:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87625 00:20:54.712 Process raid pid: 87625 00:20:54.712 14:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:54.712 14:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87625' 00:20:54.712 14:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87625 00:20:54.712 14:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 87625 ']' 00:20:54.712 14:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:54.712 14:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:54.712 14:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:54.712 14:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:54.712 14:46:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:54.970 [2024-11-04 14:46:53.897634] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:20:54.970 [2024-11-04 14:46:53.897819] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:54.970 [2024-11-04 14:46:54.089115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.227 [2024-11-04 14:46:54.244495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.485 [2024-11-04 14:46:54.482295] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:55.485 [2024-11-04 14:46:54.482357] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:55.743 14:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:55.743 14:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:20:55.743 14:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:55.743 14:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.743 14:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:55.743 [2024-11-04 14:46:54.806376] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:55.744 [2024-11-04 14:46:54.806444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:20:55.744 [2024-11-04 14:46:54.806462] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:55.744 [2024-11-04 14:46:54.806479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:55.744 14:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.744 14:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:55.744 14:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:55.744 14:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:55.744 14:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:55.744 14:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:55.744 14:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:55.744 14:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:55.744 14:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:55.744 14:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:55.744 14:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:55.744 14:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.744 14:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.744 14:46:54 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:55.744 14:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:55.744 14:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.744 14:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:55.744 "name": "Existed_Raid", 00:20:55.744 "uuid": "b28cc659-e776-4e2f-be31-eea3e55c1b1e", 00:20:55.744 "strip_size_kb": 0, 00:20:55.744 "state": "configuring", 00:20:55.744 "raid_level": "raid1", 00:20:55.744 "superblock": true, 00:20:55.744 "num_base_bdevs": 2, 00:20:55.744 "num_base_bdevs_discovered": 0, 00:20:55.744 "num_base_bdevs_operational": 2, 00:20:55.744 "base_bdevs_list": [ 00:20:55.744 { 00:20:55.744 "name": "BaseBdev1", 00:20:55.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.744 "is_configured": false, 00:20:55.744 "data_offset": 0, 00:20:55.744 "data_size": 0 00:20:55.744 }, 00:20:55.744 { 00:20:55.744 "name": "BaseBdev2", 00:20:55.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.744 "is_configured": false, 00:20:55.744 "data_offset": 0, 00:20:55.744 "data_size": 0 00:20:55.744 } 00:20:55.744 ] 00:20:55.744 }' 00:20:55.744 14:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:55.744 14:46:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:56.310 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:56.310 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.310 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:56.310 [2024-11-04 
14:46:55.358444] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:56.310 [2024-11-04 14:46:55.358495] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:56.310 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.310 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:56.310 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.310 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:56.310 [2024-11-04 14:46:55.366430] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:56.310 [2024-11-04 14:46:55.366486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:56.310 [2024-11-04 14:46:55.366502] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:56.310 [2024-11-04 14:46:55.366520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:56.310 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.310 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:20:56.310 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.310 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:56.310 [2024-11-04 14:46:55.412396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:56.310 BaseBdev1 
00:20:56.310 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.310 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:56.310 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:20:56.310 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:56.310 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:20:56.310 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:56.310 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:56.310 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:56.310 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.310 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:56.310 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.310 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:56.310 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.310 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:56.569 [ 00:20:56.569 { 00:20:56.569 "name": "BaseBdev1", 00:20:56.569 "aliases": [ 00:20:56.569 "11aef564-e767-47d7-a2e6-9d5bc57f6bd9" 00:20:56.569 ], 00:20:56.569 "product_name": "Malloc disk", 00:20:56.569 
"block_size": 4096, 00:20:56.569 "num_blocks": 8192, 00:20:56.569 "uuid": "11aef564-e767-47d7-a2e6-9d5bc57f6bd9", 00:20:56.569 "md_size": 32, 00:20:56.569 "md_interleave": false, 00:20:56.569 "dif_type": 0, 00:20:56.569 "assigned_rate_limits": { 00:20:56.569 "rw_ios_per_sec": 0, 00:20:56.569 "rw_mbytes_per_sec": 0, 00:20:56.569 "r_mbytes_per_sec": 0, 00:20:56.569 "w_mbytes_per_sec": 0 00:20:56.569 }, 00:20:56.569 "claimed": true, 00:20:56.569 "claim_type": "exclusive_write", 00:20:56.569 "zoned": false, 00:20:56.569 "supported_io_types": { 00:20:56.569 "read": true, 00:20:56.569 "write": true, 00:20:56.569 "unmap": true, 00:20:56.569 "flush": true, 00:20:56.569 "reset": true, 00:20:56.569 "nvme_admin": false, 00:20:56.569 "nvme_io": false, 00:20:56.569 "nvme_io_md": false, 00:20:56.569 "write_zeroes": true, 00:20:56.569 "zcopy": true, 00:20:56.569 "get_zone_info": false, 00:20:56.569 "zone_management": false, 00:20:56.569 "zone_append": false, 00:20:56.569 "compare": false, 00:20:56.569 "compare_and_write": false, 00:20:56.569 "abort": true, 00:20:56.569 "seek_hole": false, 00:20:56.569 "seek_data": false, 00:20:56.569 "copy": true, 00:20:56.569 "nvme_iov_md": false 00:20:56.569 }, 00:20:56.569 "memory_domains": [ 00:20:56.569 { 00:20:56.569 "dma_device_id": "system", 00:20:56.569 "dma_device_type": 1 00:20:56.569 }, 00:20:56.569 { 00:20:56.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.569 "dma_device_type": 2 00:20:56.569 } 00:20:56.569 ], 00:20:56.569 "driver_specific": {} 00:20:56.569 } 00:20:56.569 ] 00:20:56.569 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.569 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:20:56.569 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:56.569 14:46:55 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:56.569 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:56.569 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:56.569 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:56.569 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:56.569 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:56.569 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:56.569 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:56.569 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:56.569 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.569 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.569 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:56.569 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:56.569 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.569 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:56.569 "name": "Existed_Raid", 00:20:56.569 "uuid": "6341e135-e518-46ac-961b-7c37d3019ca1", 
00:20:56.569 "strip_size_kb": 0, 00:20:56.569 "state": "configuring", 00:20:56.569 "raid_level": "raid1", 00:20:56.569 "superblock": true, 00:20:56.569 "num_base_bdevs": 2, 00:20:56.569 "num_base_bdevs_discovered": 1, 00:20:56.569 "num_base_bdevs_operational": 2, 00:20:56.569 "base_bdevs_list": [ 00:20:56.569 { 00:20:56.569 "name": "BaseBdev1", 00:20:56.569 "uuid": "11aef564-e767-47d7-a2e6-9d5bc57f6bd9", 00:20:56.569 "is_configured": true, 00:20:56.569 "data_offset": 256, 00:20:56.569 "data_size": 7936 00:20:56.569 }, 00:20:56.569 { 00:20:56.569 "name": "BaseBdev2", 00:20:56.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.569 "is_configured": false, 00:20:56.569 "data_offset": 0, 00:20:56.569 "data_size": 0 00:20:56.569 } 00:20:56.569 ] 00:20:56.569 }' 00:20:56.569 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:56.569 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:57.135 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:57.135 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.135 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:57.135 [2024-11-04 14:46:55.972640] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:57.135 [2024-11-04 14:46:55.972708] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:57.135 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.135 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:57.135 14:46:55 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.135 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:57.135 [2024-11-04 14:46:55.980664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:57.135 [2024-11-04 14:46:55.983088] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:57.135 [2024-11-04 14:46:55.983149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:57.135 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.135 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:57.135 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:57.135 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:57.135 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:57.135 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:57.135 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:57.135 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:57.135 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:57.135 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:57.135 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:57.135 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:57.135 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:57.135 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.135 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:57.135 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.135 14:46:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:57.135 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.135 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:57.135 "name": "Existed_Raid", 00:20:57.135 "uuid": "bfd9447a-f33b-4305-bf19-94846d45ed5f", 00:20:57.135 "strip_size_kb": 0, 00:20:57.135 "state": "configuring", 00:20:57.135 "raid_level": "raid1", 00:20:57.135 "superblock": true, 00:20:57.135 "num_base_bdevs": 2, 00:20:57.135 "num_base_bdevs_discovered": 1, 00:20:57.135 "num_base_bdevs_operational": 2, 00:20:57.135 "base_bdevs_list": [ 00:20:57.135 { 00:20:57.135 "name": "BaseBdev1", 00:20:57.135 "uuid": "11aef564-e767-47d7-a2e6-9d5bc57f6bd9", 00:20:57.135 "is_configured": true, 00:20:57.135 "data_offset": 256, 00:20:57.135 "data_size": 7936 00:20:57.135 }, 00:20:57.135 { 00:20:57.135 "name": "BaseBdev2", 00:20:57.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.135 "is_configured": false, 00:20:57.135 "data_offset": 0, 00:20:57.135 "data_size": 0 00:20:57.135 } 00:20:57.135 ] 00:20:57.135 }' 00:20:57.135 14:46:56 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:57.135 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:57.394 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:20:57.394 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.394 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:57.654 [2024-11-04 14:46:56.552264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:57.654 [2024-11-04 14:46:56.552567] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:57.654 [2024-11-04 14:46:56.552587] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:57.654 [2024-11-04 14:46:56.552691] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:57.654 [2024-11-04 14:46:56.552848] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:57.654 [2024-11-04 14:46:56.552878] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:57.654 BaseBdev2 00:20:57.654 [2024-11-04 14:46:56.553022] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:57.654 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.654 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:57.654 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:20:57.654 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:57.654 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:20:57.654 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:57.654 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:57.654 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:57.654 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.654 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:57.654 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.654 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:57.654 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.654 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:57.654 [ 00:20:57.654 { 00:20:57.654 "name": "BaseBdev2", 00:20:57.654 "aliases": [ 00:20:57.654 "c9e19acc-af0a-4b72-973e-ef94fe8ec428" 00:20:57.654 ], 00:20:57.654 "product_name": "Malloc disk", 00:20:57.654 "block_size": 4096, 00:20:57.654 "num_blocks": 8192, 00:20:57.654 "uuid": "c9e19acc-af0a-4b72-973e-ef94fe8ec428", 00:20:57.654 "md_size": 32, 00:20:57.654 "md_interleave": false, 00:20:57.654 "dif_type": 0, 00:20:57.654 "assigned_rate_limits": { 00:20:57.654 "rw_ios_per_sec": 0, 00:20:57.654 "rw_mbytes_per_sec": 0, 00:20:57.654 "r_mbytes_per_sec": 0, 00:20:57.654 "w_mbytes_per_sec": 0 00:20:57.654 }, 00:20:57.654 "claimed": true, 00:20:57.654 "claim_type": 
"exclusive_write", 00:20:57.654 "zoned": false, 00:20:57.654 "supported_io_types": { 00:20:57.654 "read": true, 00:20:57.654 "write": true, 00:20:57.654 "unmap": true, 00:20:57.654 "flush": true, 00:20:57.654 "reset": true, 00:20:57.654 "nvme_admin": false, 00:20:57.654 "nvme_io": false, 00:20:57.654 "nvme_io_md": false, 00:20:57.654 "write_zeroes": true, 00:20:57.654 "zcopy": true, 00:20:57.654 "get_zone_info": false, 00:20:57.654 "zone_management": false, 00:20:57.654 "zone_append": false, 00:20:57.654 "compare": false, 00:20:57.654 "compare_and_write": false, 00:20:57.654 "abort": true, 00:20:57.654 "seek_hole": false, 00:20:57.654 "seek_data": false, 00:20:57.654 "copy": true, 00:20:57.654 "nvme_iov_md": false 00:20:57.654 }, 00:20:57.654 "memory_domains": [ 00:20:57.654 { 00:20:57.654 "dma_device_id": "system", 00:20:57.654 "dma_device_type": 1 00:20:57.654 }, 00:20:57.654 { 00:20:57.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:57.654 "dma_device_type": 2 00:20:57.654 } 00:20:57.654 ], 00:20:57.654 "driver_specific": {} 00:20:57.654 } 00:20:57.654 ] 00:20:57.654 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.654 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:20:57.654 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:57.654 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:57.654 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:57.654 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:57.654 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:57.654 
14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:57.654 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:57.654 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:57.654 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:57.654 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:57.654 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:57.654 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:57.654 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.654 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.654 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:57.654 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:57.654 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.654 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:57.654 "name": "Existed_Raid", 00:20:57.654 "uuid": "bfd9447a-f33b-4305-bf19-94846d45ed5f", 00:20:57.654 "strip_size_kb": 0, 00:20:57.654 "state": "online", 00:20:57.654 "raid_level": "raid1", 00:20:57.654 "superblock": true, 00:20:57.655 "num_base_bdevs": 2, 00:20:57.655 "num_base_bdevs_discovered": 2, 00:20:57.655 "num_base_bdevs_operational": 2, 00:20:57.655 
"base_bdevs_list": [ 00:20:57.655 { 00:20:57.655 "name": "BaseBdev1", 00:20:57.655 "uuid": "11aef564-e767-47d7-a2e6-9d5bc57f6bd9", 00:20:57.655 "is_configured": true, 00:20:57.655 "data_offset": 256, 00:20:57.655 "data_size": 7936 00:20:57.655 }, 00:20:57.655 { 00:20:57.655 "name": "BaseBdev2", 00:20:57.655 "uuid": "c9e19acc-af0a-4b72-973e-ef94fe8ec428", 00:20:57.655 "is_configured": true, 00:20:57.655 "data_offset": 256, 00:20:57.655 "data_size": 7936 00:20:57.655 } 00:20:57.655 ] 00:20:57.655 }' 00:20:57.655 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:57.655 14:46:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:58.221 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:58.221 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:58.221 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:58.221 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:58.221 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:20:58.221 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:58.221 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:58.221 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:58.221 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.221 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:20:58.221 [2024-11-04 14:46:57.085810] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:58.221 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.221 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:58.221 "name": "Existed_Raid", 00:20:58.221 "aliases": [ 00:20:58.221 "bfd9447a-f33b-4305-bf19-94846d45ed5f" 00:20:58.221 ], 00:20:58.221 "product_name": "Raid Volume", 00:20:58.221 "block_size": 4096, 00:20:58.221 "num_blocks": 7936, 00:20:58.221 "uuid": "bfd9447a-f33b-4305-bf19-94846d45ed5f", 00:20:58.221 "md_size": 32, 00:20:58.221 "md_interleave": false, 00:20:58.221 "dif_type": 0, 00:20:58.221 "assigned_rate_limits": { 00:20:58.221 "rw_ios_per_sec": 0, 00:20:58.221 "rw_mbytes_per_sec": 0, 00:20:58.221 "r_mbytes_per_sec": 0, 00:20:58.221 "w_mbytes_per_sec": 0 00:20:58.221 }, 00:20:58.221 "claimed": false, 00:20:58.221 "zoned": false, 00:20:58.221 "supported_io_types": { 00:20:58.221 "read": true, 00:20:58.221 "write": true, 00:20:58.221 "unmap": false, 00:20:58.221 "flush": false, 00:20:58.221 "reset": true, 00:20:58.221 "nvme_admin": false, 00:20:58.221 "nvme_io": false, 00:20:58.221 "nvme_io_md": false, 00:20:58.221 "write_zeroes": true, 00:20:58.221 "zcopy": false, 00:20:58.222 "get_zone_info": false, 00:20:58.222 "zone_management": false, 00:20:58.222 "zone_append": false, 00:20:58.222 "compare": false, 00:20:58.222 "compare_and_write": false, 00:20:58.222 "abort": false, 00:20:58.222 "seek_hole": false, 00:20:58.222 "seek_data": false, 00:20:58.222 "copy": false, 00:20:58.222 "nvme_iov_md": false 00:20:58.222 }, 00:20:58.222 "memory_domains": [ 00:20:58.222 { 00:20:58.222 "dma_device_id": "system", 00:20:58.222 "dma_device_type": 1 00:20:58.222 }, 00:20:58.222 { 00:20:58.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:58.222 "dma_device_type": 2 00:20:58.222 }, 00:20:58.222 { 
00:20:58.222 "dma_device_id": "system", 00:20:58.222 "dma_device_type": 1 00:20:58.222 }, 00:20:58.222 { 00:20:58.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:58.222 "dma_device_type": 2 00:20:58.222 } 00:20:58.222 ], 00:20:58.222 "driver_specific": { 00:20:58.222 "raid": { 00:20:58.222 "uuid": "bfd9447a-f33b-4305-bf19-94846d45ed5f", 00:20:58.222 "strip_size_kb": 0, 00:20:58.222 "state": "online", 00:20:58.222 "raid_level": "raid1", 00:20:58.222 "superblock": true, 00:20:58.222 "num_base_bdevs": 2, 00:20:58.222 "num_base_bdevs_discovered": 2, 00:20:58.222 "num_base_bdevs_operational": 2, 00:20:58.222 "base_bdevs_list": [ 00:20:58.222 { 00:20:58.222 "name": "BaseBdev1", 00:20:58.222 "uuid": "11aef564-e767-47d7-a2e6-9d5bc57f6bd9", 00:20:58.222 "is_configured": true, 00:20:58.222 "data_offset": 256, 00:20:58.222 "data_size": 7936 00:20:58.222 }, 00:20:58.222 { 00:20:58.222 "name": "BaseBdev2", 00:20:58.222 "uuid": "c9e19acc-af0a-4b72-973e-ef94fe8ec428", 00:20:58.222 "is_configured": true, 00:20:58.222 "data_offset": 256, 00:20:58.222 "data_size": 7936 00:20:58.222 } 00:20:58.222 ] 00:20:58.222 } 00:20:58.222 } 00:20:58.222 }' 00:20:58.222 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:58.222 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:58.222 BaseBdev2' 00:20:58.222 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:58.222 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:20:58.222 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:58.222 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:58.222 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:58.222 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.222 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:58.222 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.222 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:58.222 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:58.222 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:58.222 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:58.222 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.222 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:58.222 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:58.222 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.222 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:58.222 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:20:58.222 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:58.222 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.222 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:58.480 [2024-11-04 14:46:57.345593] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:58.480 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.480 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:58.480 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:20:58.480 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:58.480 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:20:58.480 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:58.480 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:20:58.480 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:58.480 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:58.480 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:58.480 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:58.480 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:20:58.480 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:58.480 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:58.480 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:58.480 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:58.480 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.480 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.480 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:58.480 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:58.480 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.480 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:58.480 "name": "Existed_Raid", 00:20:58.480 "uuid": "bfd9447a-f33b-4305-bf19-94846d45ed5f", 00:20:58.480 "strip_size_kb": 0, 00:20:58.480 "state": "online", 00:20:58.480 "raid_level": "raid1", 00:20:58.481 "superblock": true, 00:20:58.481 "num_base_bdevs": 2, 00:20:58.481 "num_base_bdevs_discovered": 1, 00:20:58.481 "num_base_bdevs_operational": 1, 00:20:58.481 "base_bdevs_list": [ 00:20:58.481 { 00:20:58.481 "name": null, 00:20:58.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.481 "is_configured": false, 00:20:58.481 "data_offset": 0, 00:20:58.481 "data_size": 7936 00:20:58.481 }, 00:20:58.481 { 00:20:58.481 "name": "BaseBdev2", 00:20:58.481 "uuid": 
"c9e19acc-af0a-4b72-973e-ef94fe8ec428", 00:20:58.481 "is_configured": true, 00:20:58.481 "data_offset": 256, 00:20:58.481 "data_size": 7936 00:20:58.481 } 00:20:58.481 ] 00:20:58.481 }' 00:20:58.481 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:58.481 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:59.048 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:59.048 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:59.048 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.048 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:59.048 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.048 14:46:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:59.048 14:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.048 14:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:59.048 14:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:59.048 14:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:59.048 14:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.048 14:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:59.048 [2024-11-04 14:46:58.048023] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:59.048 [2024-11-04 14:46:58.048163] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:59.048 [2024-11-04 14:46:58.142130] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:59.048 [2024-11-04 14:46:58.142202] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:59.048 [2024-11-04 14:46:58.142224] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:59.048 14:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.048 14:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:59.048 14:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:59.048 14:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:59.048 14:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.048 14:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.048 14:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:59.048 14:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.306 14:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:59.306 14:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:59.306 14:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:20:59.306 14:46:58 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87625 00:20:59.306 14:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 87625 ']' 00:20:59.306 14:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 87625 00:20:59.306 14:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:20:59.306 14:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:59.306 14:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87625 00:20:59.306 14:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:59.306 14:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:59.306 killing process with pid 87625 00:20:59.306 14:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87625' 00:20:59.306 14:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 87625 00:20:59.306 [2024-11-04 14:46:58.232194] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:59.306 14:46:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 87625 00:20:59.306 [2024-11-04 14:46:58.246963] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:00.253 14:46:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:21:00.253 00:21:00.253 real 0m5.508s 00:21:00.253 user 0m8.344s 00:21:00.253 sys 0m0.767s 00:21:00.253 14:46:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:00.253 
14:46:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:00.253 ************************************ 00:21:00.253 END TEST raid_state_function_test_sb_md_separate 00:21:00.253 ************************************ 00:21:00.253 14:46:59 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:21:00.253 14:46:59 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:21:00.253 14:46:59 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:00.253 14:46:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:00.253 ************************************ 00:21:00.253 START TEST raid_superblock_test_md_separate 00:21:00.253 ************************************ 00:21:00.253 14:46:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:21:00.253 14:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:21:00.253 14:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:21:00.253 14:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:00.253 14:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:00.254 14:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:00.254 14:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:00.254 14:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:00.254 14:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:00.254 14:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:21:00.254 14:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:00.254 14:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:00.254 14:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:00.254 14:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:00.254 14:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:21:00.254 14:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:21:00.254 14:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87876 00:21:00.254 14:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:00.254 14:46:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87876 00:21:00.254 14:46:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # '[' -z 87876 ']' 00:21:00.254 14:46:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.254 14:46:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:00.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.254 14:46:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:00.254 14:46:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:00.254 14:46:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:00.512 [2024-11-04 14:46:59.428037] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:21:00.512 [2024-11-04 14:46:59.428204] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87876 ] 00:21:00.512 [2024-11-04 14:46:59.602749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.770 [2024-11-04 14:46:59.756220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.028 [2024-11-04 14:46:59.959378] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:01.028 [2024-11-04 14:46:59.959458] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:01.596 14:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@866 -- # return 0 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:01.597 14:47:00 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:01.597 malloc1 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:01.597 [2024-11-04 14:47:00.576000] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:01.597 [2024-11-04 14:47:00.576101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:01.597 [2024-11-04 14:47:00.576150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:01.597 [2024-11-04 14:47:00.576178] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:01.597 [2024-11-04 14:47:00.578846] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:01.597 [2024-11-04 14:47:00.578908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:21:01.597 pt1 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:01.597 malloc2 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.597 14:47:00 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:01.597 [2024-11-04 14:47:00.643899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:01.597 [2024-11-04 14:47:00.643994] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:01.597 [2024-11-04 14:47:00.644027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:01.597 [2024-11-04 14:47:00.644042] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:01.597 [2024-11-04 14:47:00.646573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:01.597 [2024-11-04 14:47:00.646750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:01.597 pt2 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:01.597 [2024-11-04 14:47:00.655943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:01.597 [2024-11-04 14:47:00.658347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:01.597 [2024-11-04 14:47:00.658581] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:01.597 [2024-11-04 14:47:00.658604] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:01.597 [2024-11-04 14:47:00.658708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:01.597 [2024-11-04 14:47:00.658871] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:01.597 [2024-11-04 14:47:00.658892] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:01.597 [2024-11-04 14:47:00.659055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:01.597 14:47:00 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:01.597 "name": "raid_bdev1", 00:21:01.597 "uuid": "68317c42-3378-42dc-a3bc-4f61a40fb8fb", 00:21:01.597 "strip_size_kb": 0, 00:21:01.597 "state": "online", 00:21:01.597 "raid_level": "raid1", 00:21:01.597 "superblock": true, 00:21:01.597 "num_base_bdevs": 2, 00:21:01.597 "num_base_bdevs_discovered": 2, 00:21:01.597 "num_base_bdevs_operational": 2, 00:21:01.597 "base_bdevs_list": [ 00:21:01.597 { 00:21:01.597 "name": "pt1", 00:21:01.597 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:01.597 "is_configured": true, 00:21:01.597 "data_offset": 256, 00:21:01.597 "data_size": 7936 00:21:01.597 }, 00:21:01.597 { 00:21:01.597 "name": "pt2", 00:21:01.597 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:01.597 "is_configured": true, 00:21:01.597 "data_offset": 256, 00:21:01.597 "data_size": 7936 00:21:01.597 } 00:21:01.597 ] 00:21:01.597 }' 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:01.597 14:47:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:02.167 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:02.167 14:47:01 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:02.167 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:02.167 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:02.167 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:21:02.167 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:02.167 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:02.167 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.167 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:02.167 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:02.167 [2024-11-04 14:47:01.156400] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:02.167 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.167 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:02.167 "name": "raid_bdev1", 00:21:02.167 "aliases": [ 00:21:02.167 "68317c42-3378-42dc-a3bc-4f61a40fb8fb" 00:21:02.167 ], 00:21:02.167 "product_name": "Raid Volume", 00:21:02.167 "block_size": 4096, 00:21:02.167 "num_blocks": 7936, 00:21:02.167 "uuid": "68317c42-3378-42dc-a3bc-4f61a40fb8fb", 00:21:02.167 "md_size": 32, 00:21:02.167 "md_interleave": false, 00:21:02.167 "dif_type": 0, 00:21:02.167 "assigned_rate_limits": { 00:21:02.167 "rw_ios_per_sec": 0, 00:21:02.167 "rw_mbytes_per_sec": 0, 00:21:02.167 "r_mbytes_per_sec": 0, 00:21:02.167 "w_mbytes_per_sec": 0 00:21:02.167 }, 00:21:02.167 "claimed": false, 00:21:02.167 "zoned": false, 
00:21:02.167 "supported_io_types": { 00:21:02.167 "read": true, 00:21:02.167 "write": true, 00:21:02.167 "unmap": false, 00:21:02.167 "flush": false, 00:21:02.167 "reset": true, 00:21:02.167 "nvme_admin": false, 00:21:02.167 "nvme_io": false, 00:21:02.167 "nvme_io_md": false, 00:21:02.167 "write_zeroes": true, 00:21:02.167 "zcopy": false, 00:21:02.167 "get_zone_info": false, 00:21:02.167 "zone_management": false, 00:21:02.167 "zone_append": false, 00:21:02.167 "compare": false, 00:21:02.167 "compare_and_write": false, 00:21:02.167 "abort": false, 00:21:02.167 "seek_hole": false, 00:21:02.167 "seek_data": false, 00:21:02.167 "copy": false, 00:21:02.167 "nvme_iov_md": false 00:21:02.167 }, 00:21:02.167 "memory_domains": [ 00:21:02.167 { 00:21:02.167 "dma_device_id": "system", 00:21:02.167 "dma_device_type": 1 00:21:02.167 }, 00:21:02.167 { 00:21:02.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:02.167 "dma_device_type": 2 00:21:02.167 }, 00:21:02.167 { 00:21:02.167 "dma_device_id": "system", 00:21:02.167 "dma_device_type": 1 00:21:02.167 }, 00:21:02.167 { 00:21:02.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:02.167 "dma_device_type": 2 00:21:02.167 } 00:21:02.167 ], 00:21:02.167 "driver_specific": { 00:21:02.167 "raid": { 00:21:02.167 "uuid": "68317c42-3378-42dc-a3bc-4f61a40fb8fb", 00:21:02.167 "strip_size_kb": 0, 00:21:02.167 "state": "online", 00:21:02.167 "raid_level": "raid1", 00:21:02.167 "superblock": true, 00:21:02.167 "num_base_bdevs": 2, 00:21:02.167 "num_base_bdevs_discovered": 2, 00:21:02.167 "num_base_bdevs_operational": 2, 00:21:02.167 "base_bdevs_list": [ 00:21:02.167 { 00:21:02.167 "name": "pt1", 00:21:02.167 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:02.167 "is_configured": true, 00:21:02.167 "data_offset": 256, 00:21:02.167 "data_size": 7936 00:21:02.167 }, 00:21:02.167 { 00:21:02.167 "name": "pt2", 00:21:02.167 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:02.167 "is_configured": true, 00:21:02.167 "data_offset": 256, 
00:21:02.167 "data_size": 7936 00:21:02.167 } 00:21:02.167 ] 00:21:02.167 } 00:21:02.167 } 00:21:02.167 }' 00:21:02.167 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:02.167 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:02.167 pt2' 00:21:02.167 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:02.426 [2024-11-04 14:47:01.428390] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=68317c42-3378-42dc-a3bc-4f61a40fb8fb 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 68317c42-3378-42dc-a3bc-4f61a40fb8fb ']' 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:02.426 [2024-11-04 14:47:01.472067] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:02.426 [2024-11-04 14:47:01.472094] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:02.426 [2024-11-04 14:47:01.472197] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:02.426 [2024-11-04 14:47:01.472271] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:02.426 [2024-11-04 14:47:01.472300] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.426 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:02.685 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.685 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:02.685 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:02.685 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.685 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:02.685 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.685 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:02.685 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:02.685 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:21:02.685 14:47:01 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:02.685 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:02.685 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:02.685 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:02.685 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:02.685 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:02.685 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.685 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:02.685 [2024-11-04 14:47:01.608145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:02.685 [2024-11-04 14:47:01.610659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:02.685 [2024-11-04 14:47:01.610767] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:02.685 [2024-11-04 14:47:01.610851] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:02.685 [2024-11-04 14:47:01.610878] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:02.685 [2024-11-04 14:47:01.610894] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:02.685 request: 00:21:02.685 { 00:21:02.685 "name": 
"raid_bdev1", 00:21:02.685 "raid_level": "raid1", 00:21:02.685 "base_bdevs": [ 00:21:02.685 "malloc1", 00:21:02.686 "malloc2" 00:21:02.686 ], 00:21:02.686 "superblock": false, 00:21:02.686 "method": "bdev_raid_create", 00:21:02.686 "req_id": 1 00:21:02.686 } 00:21:02.686 Got JSON-RPC error response 00:21:02.686 response: 00:21:02.686 { 00:21:02.686 "code": -17, 00:21:02.686 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:02.686 } 00:21:02.686 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:02.686 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:21:02.686 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:02.686 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:02.686 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:02.686 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.686 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.686 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:02.686 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:02.686 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.686 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:02.686 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:02.686 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:21:02.686 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.686 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:02.686 [2024-11-04 14:47:01.680165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:02.686 [2024-11-04 14:47:01.680384] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:02.686 [2024-11-04 14:47:01.680422] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:02.686 [2024-11-04 14:47:01.680441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:02.686 [2024-11-04 14:47:01.683071] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:02.686 [2024-11-04 14:47:01.683123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:02.686 [2024-11-04 14:47:01.683196] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:02.686 [2024-11-04 14:47:01.683270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:02.686 pt1 00:21:02.686 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.686 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:02.686 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:02.686 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:02.686 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:02.686 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:21:02.686 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:02.686 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:02.686 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:02.686 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:02.686 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:02.686 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.686 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.686 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.686 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:02.686 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.686 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:02.686 "name": "raid_bdev1", 00:21:02.686 "uuid": "68317c42-3378-42dc-a3bc-4f61a40fb8fb", 00:21:02.686 "strip_size_kb": 0, 00:21:02.686 "state": "configuring", 00:21:02.686 "raid_level": "raid1", 00:21:02.686 "superblock": true, 00:21:02.686 "num_base_bdevs": 2, 00:21:02.686 "num_base_bdevs_discovered": 1, 00:21:02.686 "num_base_bdevs_operational": 2, 00:21:02.686 "base_bdevs_list": [ 00:21:02.686 { 00:21:02.686 "name": "pt1", 00:21:02.686 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:02.686 "is_configured": true, 00:21:02.686 "data_offset": 256, 00:21:02.686 "data_size": 7936 00:21:02.686 }, 00:21:02.686 { 00:21:02.686 "name": null, 00:21:02.686 
"uuid": "00000000-0000-0000-0000-000000000002", 00:21:02.686 "is_configured": false, 00:21:02.686 "data_offset": 256, 00:21:02.686 "data_size": 7936 00:21:02.686 } 00:21:02.686 ] 00:21:02.686 }' 00:21:02.686 14:47:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:02.686 14:47:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:03.256 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:21:03.256 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:03.256 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:03.256 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:03.256 14:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.256 14:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:03.256 [2024-11-04 14:47:02.208287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:03.256 [2024-11-04 14:47:02.208395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:03.256 [2024-11-04 14:47:02.208427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:03.256 [2024-11-04 14:47:02.208445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:03.256 [2024-11-04 14:47:02.208715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:03.256 [2024-11-04 14:47:02.208746] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:03.256 [2024-11-04 14:47:02.208812] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:21:03.256 [2024-11-04 14:47:02.208847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:03.256 [2024-11-04 14:47:02.209005] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:03.256 [2024-11-04 14:47:02.209027] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:03.256 [2024-11-04 14:47:02.209114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:03.256 [2024-11-04 14:47:02.209263] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:03.256 [2024-11-04 14:47:02.209278] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:03.256 [2024-11-04 14:47:02.209396] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:03.256 pt2 00:21:03.256 14:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.256 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:03.256 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:03.256 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:03.256 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:03.256 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:03.256 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:03.256 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:03.256 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:21:03.256 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:03.256 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:03.256 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:03.256 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:03.256 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.256 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.256 14:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.256 14:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:03.256 14:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.256 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:03.256 "name": "raid_bdev1", 00:21:03.256 "uuid": "68317c42-3378-42dc-a3bc-4f61a40fb8fb", 00:21:03.256 "strip_size_kb": 0, 00:21:03.256 "state": "online", 00:21:03.256 "raid_level": "raid1", 00:21:03.256 "superblock": true, 00:21:03.256 "num_base_bdevs": 2, 00:21:03.256 "num_base_bdevs_discovered": 2, 00:21:03.256 "num_base_bdevs_operational": 2, 00:21:03.256 "base_bdevs_list": [ 00:21:03.256 { 00:21:03.256 "name": "pt1", 00:21:03.256 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:03.256 "is_configured": true, 00:21:03.256 "data_offset": 256, 00:21:03.256 "data_size": 7936 00:21:03.256 }, 00:21:03.256 { 00:21:03.256 "name": "pt2", 00:21:03.256 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:03.256 "is_configured": true, 00:21:03.256 "data_offset": 256, 
00:21:03.256 "data_size": 7936 00:21:03.256 } 00:21:03.256 ] 00:21:03.256 }' 00:21:03.256 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:03.256 14:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:03.823 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:03.823 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:03.823 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:03.823 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:03.823 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:21:03.823 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:03.823 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:03.823 14:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.823 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:03.823 14:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:03.823 [2024-11-04 14:47:02.760786] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:03.823 14:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.823 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:03.823 "name": "raid_bdev1", 00:21:03.823 "aliases": [ 00:21:03.823 "68317c42-3378-42dc-a3bc-4f61a40fb8fb" 00:21:03.823 ], 00:21:03.823 "product_name": 
"Raid Volume", 00:21:03.823 "block_size": 4096, 00:21:03.823 "num_blocks": 7936, 00:21:03.823 "uuid": "68317c42-3378-42dc-a3bc-4f61a40fb8fb", 00:21:03.823 "md_size": 32, 00:21:03.823 "md_interleave": false, 00:21:03.823 "dif_type": 0, 00:21:03.823 "assigned_rate_limits": { 00:21:03.823 "rw_ios_per_sec": 0, 00:21:03.823 "rw_mbytes_per_sec": 0, 00:21:03.823 "r_mbytes_per_sec": 0, 00:21:03.823 "w_mbytes_per_sec": 0 00:21:03.823 }, 00:21:03.823 "claimed": false, 00:21:03.823 "zoned": false, 00:21:03.823 "supported_io_types": { 00:21:03.823 "read": true, 00:21:03.823 "write": true, 00:21:03.823 "unmap": false, 00:21:03.823 "flush": false, 00:21:03.823 "reset": true, 00:21:03.823 "nvme_admin": false, 00:21:03.823 "nvme_io": false, 00:21:03.823 "nvme_io_md": false, 00:21:03.823 "write_zeroes": true, 00:21:03.823 "zcopy": false, 00:21:03.823 "get_zone_info": false, 00:21:03.823 "zone_management": false, 00:21:03.823 "zone_append": false, 00:21:03.823 "compare": false, 00:21:03.823 "compare_and_write": false, 00:21:03.823 "abort": false, 00:21:03.823 "seek_hole": false, 00:21:03.823 "seek_data": false, 00:21:03.823 "copy": false, 00:21:03.823 "nvme_iov_md": false 00:21:03.823 }, 00:21:03.823 "memory_domains": [ 00:21:03.823 { 00:21:03.823 "dma_device_id": "system", 00:21:03.823 "dma_device_type": 1 00:21:03.823 }, 00:21:03.823 { 00:21:03.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:03.823 "dma_device_type": 2 00:21:03.823 }, 00:21:03.823 { 00:21:03.823 "dma_device_id": "system", 00:21:03.823 "dma_device_type": 1 00:21:03.823 }, 00:21:03.823 { 00:21:03.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:03.823 "dma_device_type": 2 00:21:03.823 } 00:21:03.823 ], 00:21:03.823 "driver_specific": { 00:21:03.823 "raid": { 00:21:03.823 "uuid": "68317c42-3378-42dc-a3bc-4f61a40fb8fb", 00:21:03.823 "strip_size_kb": 0, 00:21:03.823 "state": "online", 00:21:03.823 "raid_level": "raid1", 00:21:03.823 "superblock": true, 00:21:03.823 "num_base_bdevs": 2, 00:21:03.823 
"num_base_bdevs_discovered": 2, 00:21:03.823 "num_base_bdevs_operational": 2, 00:21:03.823 "base_bdevs_list": [ 00:21:03.823 { 00:21:03.823 "name": "pt1", 00:21:03.823 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:03.823 "is_configured": true, 00:21:03.823 "data_offset": 256, 00:21:03.823 "data_size": 7936 00:21:03.823 }, 00:21:03.823 { 00:21:03.823 "name": "pt2", 00:21:03.823 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:03.823 "is_configured": true, 00:21:03.823 "data_offset": 256, 00:21:03.823 "data_size": 7936 00:21:03.823 } 00:21:03.823 ] 00:21:03.823 } 00:21:03.823 } 00:21:03.823 }' 00:21:03.823 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:03.823 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:03.823 pt2' 00:21:03.823 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:03.823 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:21:03.823 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:03.823 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:03.823 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:03.823 14:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.823 14:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:03.823 14:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.082 
14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:04.082 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:04.082 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:04.082 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:04.082 14:47:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:04.082 14:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.082 14:47:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:04.082 14:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.082 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:21:04.082 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:21:04.082 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:04.082 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:04.082 14:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.082 14:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:04.082 [2024-11-04 14:47:03.048868] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:04.082 14:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:21:04.082 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 68317c42-3378-42dc-a3bc-4f61a40fb8fb '!=' 68317c42-3378-42dc-a3bc-4f61a40fb8fb ']' 00:21:04.082 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:21:04.082 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:04.082 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:21:04.082 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:04.083 14:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.083 14:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:04.083 [2024-11-04 14:47:03.096608] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:04.083 14:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.083 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:04.083 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:04.083 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:04.083 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:04.083 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:04.083 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:04.083 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:04.083 14:47:03 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:04.083 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:04.083 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:04.083 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.083 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.083 14:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.083 14:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:04.083 14:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.083 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:04.083 "name": "raid_bdev1", 00:21:04.083 "uuid": "68317c42-3378-42dc-a3bc-4f61a40fb8fb", 00:21:04.083 "strip_size_kb": 0, 00:21:04.083 "state": "online", 00:21:04.083 "raid_level": "raid1", 00:21:04.083 "superblock": true, 00:21:04.083 "num_base_bdevs": 2, 00:21:04.083 "num_base_bdevs_discovered": 1, 00:21:04.083 "num_base_bdevs_operational": 1, 00:21:04.083 "base_bdevs_list": [ 00:21:04.083 { 00:21:04.083 "name": null, 00:21:04.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.083 "is_configured": false, 00:21:04.083 "data_offset": 0, 00:21:04.083 "data_size": 7936 00:21:04.083 }, 00:21:04.083 { 00:21:04.083 "name": "pt2", 00:21:04.083 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:04.083 "is_configured": true, 00:21:04.083 "data_offset": 256, 00:21:04.083 "data_size": 7936 00:21:04.083 } 00:21:04.083 ] 00:21:04.083 }' 00:21:04.083 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:21:04.083 14:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:04.650 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:04.650 14:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.650 14:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:04.650 [2024-11-04 14:47:03.592682] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:04.650 [2024-11-04 14:47:03.592865] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:04.650 [2024-11-04 14:47:03.593003] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:04.650 [2024-11-04 14:47:03.593076] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:04.650 [2024-11-04 14:47:03.593097] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:04.650 14:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.650 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.650 14:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.650 14:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:04.650 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:04.650 14:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.650 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:04.650 14:47:03 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:04.650 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:04.650 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:04.650 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:04.650 14:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.650 14:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:04.650 14:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.650 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:04.650 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:04.650 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:04.650 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:04.650 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:21:04.650 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:04.650 14:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.650 14:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:04.650 [2024-11-04 14:47:03.656719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:04.650 [2024-11-04 14:47:03.656811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:04.650 
[2024-11-04 14:47:03.656842] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:04.650 [2024-11-04 14:47:03.656860] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:04.650 [2024-11-04 14:47:03.659538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:04.650 [2024-11-04 14:47:03.659728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:04.650 [2024-11-04 14:47:03.659816] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:04.650 [2024-11-04 14:47:03.659885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:04.650 [2024-11-04 14:47:03.660026] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:04.651 [2024-11-04 14:47:03.660050] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:04.651 [2024-11-04 14:47:03.660142] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:04.651 [2024-11-04 14:47:03.660294] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:04.651 [2024-11-04 14:47:03.660309] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:04.651 [2024-11-04 14:47:03.660432] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:04.651 pt2 00:21:04.651 14:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.651 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:04.651 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:04.651 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:21:04.651 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:04.651 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:04.651 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:04.651 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:04.651 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:04.651 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:04.651 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:04.651 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.651 14:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.651 14:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:04.651 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.651 14:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.651 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:04.651 "name": "raid_bdev1", 00:21:04.651 "uuid": "68317c42-3378-42dc-a3bc-4f61a40fb8fb", 00:21:04.651 "strip_size_kb": 0, 00:21:04.651 "state": "online", 00:21:04.651 "raid_level": "raid1", 00:21:04.651 "superblock": true, 00:21:04.651 "num_base_bdevs": 2, 00:21:04.651 "num_base_bdevs_discovered": 1, 00:21:04.651 "num_base_bdevs_operational": 1, 00:21:04.651 "base_bdevs_list": [ 00:21:04.651 { 00:21:04.651 
"name": null, 00:21:04.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.651 "is_configured": false, 00:21:04.651 "data_offset": 256, 00:21:04.651 "data_size": 7936 00:21:04.651 }, 00:21:04.651 { 00:21:04.651 "name": "pt2", 00:21:04.651 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:04.651 "is_configured": true, 00:21:04.651 "data_offset": 256, 00:21:04.651 "data_size": 7936 00:21:04.651 } 00:21:04.651 ] 00:21:04.651 }' 00:21:04.651 14:47:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:04.651 14:47:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:05.218 14:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:05.218 14:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.218 14:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:05.218 [2024-11-04 14:47:04.188837] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:05.218 [2024-11-04 14:47:04.188876] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:05.218 [2024-11-04 14:47:04.188997] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:05.218 [2024-11-04 14:47:04.189068] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:05.218 [2024-11-04 14:47:04.189084] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:05.218 14:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.218 14:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.218 14:47:04 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:05.218 14:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.218 14:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:05.218 14:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.218 14:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:05.218 14:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:21:05.218 14:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:21:05.218 14:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:05.218 14:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.218 14:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:05.218 [2024-11-04 14:47:04.248883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:05.218 [2024-11-04 14:47:04.249119] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.218 [2024-11-04 14:47:04.249279] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:05.218 [2024-11-04 14:47:04.249400] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.218 [2024-11-04 14:47:04.252088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.218 [2024-11-04 14:47:04.252257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:05.218 [2024-11-04 14:47:04.252360] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:21:05.218 [2024-11-04 14:47:04.252423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:05.218 [2024-11-04 14:47:04.252591] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:05.218 [2024-11-04 14:47:04.252609] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:05.218 [2024-11-04 14:47:04.252634] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:21:05.218 [2024-11-04 14:47:04.252713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:05.218 [2024-11-04 14:47:04.252810] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:21:05.218 [2024-11-04 14:47:04.252825] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:05.218 [2024-11-04 14:47:04.252917] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:05.218 [2024-11-04 14:47:04.253080] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:05.218 [2024-11-04 14:47:04.253100] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:05.218 [2024-11-04 14:47:04.253282] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:05.218 pt1 00:21:05.218 14:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.218 14:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:21:05.218 14:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:05.218 14:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:21:05.218 14:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:05.218 14:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:05.218 14:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:05.218 14:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:05.218 14:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:05.218 14:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:05.218 14:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:05.218 14:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:05.218 14:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.218 14:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.218 14:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.218 14:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:05.218 14:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.218 14:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:05.218 "name": "raid_bdev1", 00:21:05.218 "uuid": "68317c42-3378-42dc-a3bc-4f61a40fb8fb", 00:21:05.218 "strip_size_kb": 0, 00:21:05.218 "state": "online", 00:21:05.219 "raid_level": "raid1", 00:21:05.219 "superblock": true, 00:21:05.219 "num_base_bdevs": 2, 00:21:05.219 "num_base_bdevs_discovered": 1, 00:21:05.219 
"num_base_bdevs_operational": 1, 00:21:05.219 "base_bdevs_list": [ 00:21:05.219 { 00:21:05.219 "name": null, 00:21:05.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.219 "is_configured": false, 00:21:05.219 "data_offset": 256, 00:21:05.219 "data_size": 7936 00:21:05.219 }, 00:21:05.219 { 00:21:05.219 "name": "pt2", 00:21:05.219 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:05.219 "is_configured": true, 00:21:05.219 "data_offset": 256, 00:21:05.219 "data_size": 7936 00:21:05.219 } 00:21:05.219 ] 00:21:05.219 }' 00:21:05.219 14:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:05.219 14:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:05.785 14:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:05.785 14:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.785 14:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:05.785 14:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:05.785 14:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.785 14:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:05.785 14:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:05.785 14:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.785 14:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:05.785 14:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:05.785 [2024-11-04 
14:47:04.785365] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:05.785 14:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.785 14:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 68317c42-3378-42dc-a3bc-4f61a40fb8fb '!=' 68317c42-3378-42dc-a3bc-4f61a40fb8fb ']' 00:21:05.785 14:47:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87876 00:21:05.785 14:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # '[' -z 87876 ']' 00:21:05.785 14:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # kill -0 87876 00:21:05.785 14:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # uname 00:21:05.785 14:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:05.785 14:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87876 00:21:05.785 killing process with pid 87876 00:21:05.785 14:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:05.785 14:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:05.785 14:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87876' 00:21:05.785 14:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@971 -- # kill 87876 00:21:05.785 [2024-11-04 14:47:04.856171] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:05.785 [2024-11-04 14:47:04.856273] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:05.785 14:47:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@976 -- # wait 87876 
00:21:05.785 [2024-11-04 14:47:04.856345] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:05.785 [2024-11-04 14:47:04.856372] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:06.043 [2024-11-04 14:47:05.058691] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:06.977 14:47:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:21:06.977 00:21:06.977 real 0m6.751s 00:21:06.977 user 0m10.735s 00:21:06.977 sys 0m0.944s 00:21:06.977 14:47:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:06.977 ************************************ 00:21:06.977 END TEST raid_superblock_test_md_separate 00:21:06.977 ************************************ 00:21:06.977 14:47:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:07.236 14:47:06 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:21:07.236 14:47:06 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:21:07.236 14:47:06 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:21:07.236 14:47:06 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:07.236 14:47:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:07.236 ************************************ 00:21:07.236 START TEST raid_rebuild_test_sb_md_separate 00:21:07.236 ************************************ 00:21:07.236 14:47:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:21:07.236 14:47:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:21:07.236 14:47:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:21:07.236 14:47:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:07.236 14:47:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:07.236 14:47:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:07.236 14:47:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:07.236 14:47:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:07.236 14:47:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:07.236 14:47:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:07.236 14:47:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:07.236 14:47:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:07.236 14:47:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:07.236 14:47:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:07.236 14:47:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:07.236 14:47:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:07.236 14:47:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:07.236 14:47:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:07.236 14:47:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:07.236 14:47:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:07.236 
14:47:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:07.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.236 14:47:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:07.236 14:47:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:21:07.236 14:47:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:07.236 14:47:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:07.236 14:47:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88206 00:21:07.236 14:47:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88206 00:21:07.236 14:47:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:07.236 14:47:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 88206 ']' 00:21:07.236 14:47:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.236 14:47:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:07.236 14:47:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:07.236 14:47:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:07.236 14:47:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:07.236 [2024-11-04 14:47:06.251990] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:21:07.236 [2024-11-04 14:47:06.252420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88206 ] 00:21:07.236 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:07.236 Zero copy mechanism will not be used. 00:21:07.495 [2024-11-04 14:47:06.444182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.754 [2024-11-04 14:47:06.623812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.754 [2024-11-04 14:47:06.832887] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:07.754 [2024-11-04 14:47:06.833125] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:08.326 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:08.326 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:21:08.326 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:08.326 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:21:08.326 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.326 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:08.326 BaseBdev1_malloc 
00:21:08.326 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.326 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:08.326 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.326 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:08.326 [2024-11-04 14:47:07.341992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:08.326 [2024-11-04 14:47:07.342116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.326 [2024-11-04 14:47:07.342149] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:08.326 [2024-11-04 14:47:07.342167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.326 [2024-11-04 14:47:07.344676] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.326 [2024-11-04 14:47:07.344718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:08.326 BaseBdev1 00:21:08.326 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.326 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:08.326 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:21:08.326 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.326 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:08.326 BaseBdev2_malloc 00:21:08.326 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.326 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:08.326 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.326 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:08.326 [2024-11-04 14:47:07.399074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:08.326 [2024-11-04 14:47:07.399140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.326 [2024-11-04 14:47:07.399166] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:08.326 [2024-11-04 14:47:07.399184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.326 [2024-11-04 14:47:07.401627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.326 [2024-11-04 14:47:07.401671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:08.326 BaseBdev2 00:21:08.326 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.326 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:21:08.326 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.326 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:08.586 spare_malloc 00:21:08.586 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.586 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:21:08.586 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.586 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:08.586 spare_delay 00:21:08.586 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.586 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:08.586 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.586 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:08.586 [2024-11-04 14:47:07.472328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:08.586 [2024-11-04 14:47:07.472426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.586 [2024-11-04 14:47:07.472455] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:08.586 [2024-11-04 14:47:07.472471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.586 [2024-11-04 14:47:07.475133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.586 [2024-11-04 14:47:07.475196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:08.586 spare 00:21:08.586 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.586 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:21:08.586 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.586 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:21:08.586 [2024-11-04 14:47:07.484391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:08.586 [2024-11-04 14:47:07.486906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:08.586 [2024-11-04 14:47:07.487198] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:08.586 [2024-11-04 14:47:07.487221] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:08.586 [2024-11-04 14:47:07.487319] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:08.586 [2024-11-04 14:47:07.487469] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:08.586 [2024-11-04 14:47:07.487483] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:08.586 [2024-11-04 14:47:07.487649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:08.586 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.586 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:08.586 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:08.586 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:08.586 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:08.586 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:08.586 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:08.586 14:47:07 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:08.586 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:08.586 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:08.586 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:08.586 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.586 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.586 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.586 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:08.586 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.586 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:08.586 "name": "raid_bdev1", 00:21:08.586 "uuid": "3ece13f6-981c-4ca9-ace7-5ba4221b1a30", 00:21:08.586 "strip_size_kb": 0, 00:21:08.586 "state": "online", 00:21:08.587 "raid_level": "raid1", 00:21:08.587 "superblock": true, 00:21:08.587 "num_base_bdevs": 2, 00:21:08.587 "num_base_bdevs_discovered": 2, 00:21:08.587 "num_base_bdevs_operational": 2, 00:21:08.587 "base_bdevs_list": [ 00:21:08.587 { 00:21:08.587 "name": "BaseBdev1", 00:21:08.587 "uuid": "52249d8b-de89-5dc2-b670-a69181684244", 00:21:08.587 "is_configured": true, 00:21:08.587 "data_offset": 256, 00:21:08.587 "data_size": 7936 00:21:08.587 }, 00:21:08.587 { 00:21:08.587 "name": "BaseBdev2", 00:21:08.587 "uuid": "a2c76cd6-7833-57e9-9c9f-939b1f39c96a", 00:21:08.587 "is_configured": true, 00:21:08.587 "data_offset": 256, 00:21:08.587 "data_size": 7936 
00:21:08.587 } 00:21:08.587 ] 00:21:08.587 }' 00:21:08.587 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:08.587 14:47:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:09.153 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:09.153 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:09.153 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.153 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:09.153 [2024-11-04 14:47:08.012852] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:09.153 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.153 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:21:09.153 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.153 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.153 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:09.153 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:09.153 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.153 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:21:09.153 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:09.153 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:21:09.153 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:21:09.153 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:21:09.153 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:09.153 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:09.153 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:09.153 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:09.153 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:09.153 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:21:09.153 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:09.153 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:09.153 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:09.411 [2024-11-04 14:47:08.404725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:09.411 /dev/nbd0 00:21:09.411 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:09.411 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:09.411 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:21:09.411 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@871 -- # local i 00:21:09.411 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:09.411 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:09.411 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:21:09.411 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:21:09.411 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:09.411 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:09.411 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:09.411 1+0 records in 00:21:09.411 1+0 records out 00:21:09.411 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002095 s, 19.6 MB/s 00:21:09.411 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:09.411 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:21:09.411 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:09.411 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:09.411 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:21:09.411 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:09.411 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:09.411 14:47:08 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:21:09.411 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:21:09.411 14:47:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:21:10.346 7936+0 records in 00:21:10.346 7936+0 records out 00:21:10.346 32505856 bytes (33 MB, 31 MiB) copied, 0.915063 s, 35.5 MB/s 00:21:10.346 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:10.346 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:10.346 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:10.346 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:10.346 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:21:10.346 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:10.346 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:10.609 [2024-11-04 14:47:09.681561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:10.609 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:10.609 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:10.609 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:10.609 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:10.609 14:47:09 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:10.609 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:10.609 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:21:10.609 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:21:10.609 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:10.609 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.609 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:10.609 [2024-11-04 14:47:09.704155] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:10.609 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.609 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:10.609 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:10.609 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:10.609 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:10.609 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:10.609 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:10.609 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:10.609 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:21:10.609 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:10.609 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:10.609 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.609 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.609 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:10.609 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.609 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.868 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:10.868 "name": "raid_bdev1", 00:21:10.868 "uuid": "3ece13f6-981c-4ca9-ace7-5ba4221b1a30", 00:21:10.868 "strip_size_kb": 0, 00:21:10.868 "state": "online", 00:21:10.868 "raid_level": "raid1", 00:21:10.868 "superblock": true, 00:21:10.868 "num_base_bdevs": 2, 00:21:10.868 "num_base_bdevs_discovered": 1, 00:21:10.868 "num_base_bdevs_operational": 1, 00:21:10.868 "base_bdevs_list": [ 00:21:10.868 { 00:21:10.868 "name": null, 00:21:10.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.868 "is_configured": false, 00:21:10.868 "data_offset": 0, 00:21:10.868 "data_size": 7936 00:21:10.868 }, 00:21:10.868 { 00:21:10.868 "name": "BaseBdev2", 00:21:10.868 "uuid": "a2c76cd6-7833-57e9-9c9f-939b1f39c96a", 00:21:10.868 "is_configured": true, 00:21:10.868 "data_offset": 256, 00:21:10.868 "data_size": 7936 00:21:10.868 } 00:21:10.868 ] 00:21:10.868 }' 00:21:10.868 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:10.868 14:47:09 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:21:11.126 14:47:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:11.126 14:47:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.126 14:47:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:11.126 [2024-11-04 14:47:10.216301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:11.126 [2024-11-04 14:47:10.229846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:21:11.126 14:47:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.126 14:47:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:11.126 [2024-11-04 14:47:10.232303] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:12.505 "name": "raid_bdev1", 00:21:12.505 "uuid": "3ece13f6-981c-4ca9-ace7-5ba4221b1a30", 00:21:12.505 "strip_size_kb": 0, 00:21:12.505 "state": "online", 00:21:12.505 "raid_level": "raid1", 00:21:12.505 "superblock": true, 00:21:12.505 "num_base_bdevs": 2, 00:21:12.505 "num_base_bdevs_discovered": 2, 00:21:12.505 "num_base_bdevs_operational": 2, 00:21:12.505 "process": { 00:21:12.505 "type": "rebuild", 00:21:12.505 "target": "spare", 00:21:12.505 "progress": { 00:21:12.505 "blocks": 2560, 00:21:12.505 "percent": 32 00:21:12.505 } 00:21:12.505 }, 00:21:12.505 "base_bdevs_list": [ 00:21:12.505 { 00:21:12.505 "name": "spare", 00:21:12.505 "uuid": "1fc1eb98-8b8f-581f-a283-da9a6cfe6a54", 00:21:12.505 "is_configured": true, 00:21:12.505 "data_offset": 256, 00:21:12.505 "data_size": 7936 00:21:12.505 }, 00:21:12.505 { 00:21:12.505 "name": "BaseBdev2", 00:21:12.505 "uuid": "a2c76cd6-7833-57e9-9c9f-939b1f39c96a", 00:21:12.505 "is_configured": true, 00:21:12.505 "data_offset": 256, 00:21:12.505 "data_size": 7936 00:21:12.505 } 00:21:12.505 ] 00:21:12.505 }' 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:12.505 14:47:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:12.505 [2024-11-04 14:47:11.397789] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:12.505 [2024-11-04 14:47:11.441179] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:12.505 [2024-11-04 14:47:11.441270] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:12.505 [2024-11-04 14:47:11.441293] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:12.505 [2024-11-04 14:47:11.441308] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:12.505 14:47:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:12.505 "name": "raid_bdev1", 00:21:12.505 "uuid": "3ece13f6-981c-4ca9-ace7-5ba4221b1a30", 00:21:12.505 "strip_size_kb": 0, 00:21:12.505 "state": "online", 00:21:12.505 "raid_level": "raid1", 00:21:12.505 "superblock": true, 00:21:12.505 "num_base_bdevs": 2, 00:21:12.505 "num_base_bdevs_discovered": 1, 00:21:12.505 "num_base_bdevs_operational": 1, 00:21:12.505 "base_bdevs_list": [ 00:21:12.505 { 00:21:12.505 "name": null, 00:21:12.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.505 "is_configured": false, 00:21:12.505 "data_offset": 0, 00:21:12.505 "data_size": 7936 00:21:12.505 }, 00:21:12.505 { 00:21:12.505 "name": "BaseBdev2", 00:21:12.505 "uuid": "a2c76cd6-7833-57e9-9c9f-939b1f39c96a", 00:21:12.505 "is_configured": true, 00:21:12.505 "data_offset": 256, 00:21:12.505 "data_size": 7936 00:21:12.505 } 00:21:12.505 ] 00:21:12.505 }' 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:12.505 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:13.073 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:13.073 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:13.073 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:13.073 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:13.073 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:13.073 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.073 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.073 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.073 14:47:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:13.073 14:47:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.073 14:47:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:13.073 "name": "raid_bdev1", 00:21:13.073 "uuid": "3ece13f6-981c-4ca9-ace7-5ba4221b1a30", 00:21:13.073 "strip_size_kb": 0, 00:21:13.073 "state": "online", 00:21:13.073 "raid_level": "raid1", 00:21:13.073 "superblock": true, 00:21:13.073 "num_base_bdevs": 2, 00:21:13.073 "num_base_bdevs_discovered": 1, 00:21:13.073 "num_base_bdevs_operational": 1, 00:21:13.073 "base_bdevs_list": [ 00:21:13.073 { 00:21:13.073 "name": null, 00:21:13.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.073 
"is_configured": false, 00:21:13.073 "data_offset": 0, 00:21:13.073 "data_size": 7936 00:21:13.073 }, 00:21:13.073 { 00:21:13.073 "name": "BaseBdev2", 00:21:13.073 "uuid": "a2c76cd6-7833-57e9-9c9f-939b1f39c96a", 00:21:13.073 "is_configured": true, 00:21:13.073 "data_offset": 256, 00:21:13.073 "data_size": 7936 00:21:13.073 } 00:21:13.073 ] 00:21:13.073 }' 00:21:13.073 14:47:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:13.073 14:47:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:13.073 14:47:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:13.073 14:47:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:13.073 14:47:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:13.073 14:47:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.073 14:47:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:13.073 [2024-11-04 14:47:12.159594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:13.073 [2024-11-04 14:47:12.172611] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:21:13.073 14:47:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.073 14:47:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:13.073 [2024-11-04 14:47:12.175222] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:14.448 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:14.448 14:47:13 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:14.448 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:14.448 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:14.448 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:14.448 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.448 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.448 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.448 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:14.448 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.448 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:14.448 "name": "raid_bdev1", 00:21:14.448 "uuid": "3ece13f6-981c-4ca9-ace7-5ba4221b1a30", 00:21:14.448 "strip_size_kb": 0, 00:21:14.448 "state": "online", 00:21:14.448 "raid_level": "raid1", 00:21:14.448 "superblock": true, 00:21:14.448 "num_base_bdevs": 2, 00:21:14.448 "num_base_bdevs_discovered": 2, 00:21:14.448 "num_base_bdevs_operational": 2, 00:21:14.448 "process": { 00:21:14.448 "type": "rebuild", 00:21:14.448 "target": "spare", 00:21:14.448 "progress": { 00:21:14.448 "blocks": 2560, 00:21:14.448 "percent": 32 00:21:14.448 } 00:21:14.448 }, 00:21:14.448 "base_bdevs_list": [ 00:21:14.448 { 00:21:14.448 "name": "spare", 00:21:14.448 "uuid": "1fc1eb98-8b8f-581f-a283-da9a6cfe6a54", 00:21:14.448 "is_configured": true, 00:21:14.448 "data_offset": 256, 00:21:14.448 "data_size": 7936 00:21:14.448 }, 
00:21:14.448 { 00:21:14.448 "name": "BaseBdev2", 00:21:14.448 "uuid": "a2c76cd6-7833-57e9-9c9f-939b1f39c96a", 00:21:14.448 "is_configured": true, 00:21:14.448 "data_offset": 256, 00:21:14.448 "data_size": 7936 00:21:14.448 } 00:21:14.448 ] 00:21:14.448 }' 00:21:14.448 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:14.448 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:14.448 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:14.448 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:14.448 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:14.448 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:14.448 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:14.448 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:21:14.448 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:14.448 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:21:14.448 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=766 00:21:14.448 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:14.448 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:14.448 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:14.448 14:47:13 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:14.448 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:14.448 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:14.448 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:14.449 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.449 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.449 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:14.449 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.449 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:14.449 "name": "raid_bdev1", 00:21:14.449 "uuid": "3ece13f6-981c-4ca9-ace7-5ba4221b1a30", 00:21:14.449 "strip_size_kb": 0, 00:21:14.449 "state": "online", 00:21:14.449 "raid_level": "raid1", 00:21:14.449 "superblock": true, 00:21:14.449 "num_base_bdevs": 2, 00:21:14.449 "num_base_bdevs_discovered": 2, 00:21:14.449 "num_base_bdevs_operational": 2, 00:21:14.449 "process": { 00:21:14.449 "type": "rebuild", 00:21:14.449 "target": "spare", 00:21:14.449 "progress": { 00:21:14.449 "blocks": 2816, 00:21:14.449 "percent": 35 00:21:14.449 } 00:21:14.449 }, 00:21:14.449 "base_bdevs_list": [ 00:21:14.449 { 00:21:14.449 "name": "spare", 00:21:14.449 "uuid": "1fc1eb98-8b8f-581f-a283-da9a6cfe6a54", 00:21:14.449 "is_configured": true, 00:21:14.449 "data_offset": 256, 00:21:14.449 "data_size": 7936 00:21:14.449 }, 00:21:14.449 { 00:21:14.449 "name": "BaseBdev2", 00:21:14.449 "uuid": "a2c76cd6-7833-57e9-9c9f-939b1f39c96a", 00:21:14.449 
"is_configured": true, 00:21:14.449 "data_offset": 256, 00:21:14.449 "data_size": 7936 00:21:14.449 } 00:21:14.449 ] 00:21:14.449 }' 00:21:14.449 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:14.449 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:14.449 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:14.449 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:14.449 14:47:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:15.384 14:47:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:15.643 14:47:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:15.643 14:47:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:15.643 14:47:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:15.643 14:47:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:15.643 14:47:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:15.643 14:47:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:15.643 14:47:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.643 14:47:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.643 14:47:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:15.643 14:47:14 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.643 14:47:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:15.643 "name": "raid_bdev1", 00:21:15.643 "uuid": "3ece13f6-981c-4ca9-ace7-5ba4221b1a30", 00:21:15.643 "strip_size_kb": 0, 00:21:15.643 "state": "online", 00:21:15.643 "raid_level": "raid1", 00:21:15.643 "superblock": true, 00:21:15.643 "num_base_bdevs": 2, 00:21:15.643 "num_base_bdevs_discovered": 2, 00:21:15.643 "num_base_bdevs_operational": 2, 00:21:15.643 "process": { 00:21:15.643 "type": "rebuild", 00:21:15.643 "target": "spare", 00:21:15.643 "progress": { 00:21:15.643 "blocks": 5888, 00:21:15.643 "percent": 74 00:21:15.643 } 00:21:15.643 }, 00:21:15.643 "base_bdevs_list": [ 00:21:15.643 { 00:21:15.643 "name": "spare", 00:21:15.643 "uuid": "1fc1eb98-8b8f-581f-a283-da9a6cfe6a54", 00:21:15.643 "is_configured": true, 00:21:15.643 "data_offset": 256, 00:21:15.643 "data_size": 7936 00:21:15.643 }, 00:21:15.643 { 00:21:15.643 "name": "BaseBdev2", 00:21:15.643 "uuid": "a2c76cd6-7833-57e9-9c9f-939b1f39c96a", 00:21:15.643 "is_configured": true, 00:21:15.643 "data_offset": 256, 00:21:15.643 "data_size": 7936 00:21:15.643 } 00:21:15.643 ] 00:21:15.643 }' 00:21:15.643 14:47:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:15.644 14:47:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:15.644 14:47:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:15.644 14:47:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:15.644 14:47:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:16.211 [2024-11-04 14:47:15.298046] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:21:16.211 [2024-11-04 14:47:15.298184] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:16.211 [2024-11-04 14:47:15.298409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:16.779 14:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:16.779 14:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:16.779 14:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:16.779 14:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:16.779 14:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:16.779 14:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:16.779 14:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.779 14:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.779 14:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.779 14:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:16.779 14:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.779 14:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:16.779 "name": "raid_bdev1", 00:21:16.779 "uuid": "3ece13f6-981c-4ca9-ace7-5ba4221b1a30", 00:21:16.779 "strip_size_kb": 0, 00:21:16.779 "state": "online", 00:21:16.779 "raid_level": "raid1", 00:21:16.779 "superblock": true, 00:21:16.779 
"num_base_bdevs": 2, 00:21:16.779 "num_base_bdevs_discovered": 2, 00:21:16.779 "num_base_bdevs_operational": 2, 00:21:16.779 "base_bdevs_list": [ 00:21:16.779 { 00:21:16.779 "name": "spare", 00:21:16.779 "uuid": "1fc1eb98-8b8f-581f-a283-da9a6cfe6a54", 00:21:16.779 "is_configured": true, 00:21:16.779 "data_offset": 256, 00:21:16.779 "data_size": 7936 00:21:16.779 }, 00:21:16.779 { 00:21:16.779 "name": "BaseBdev2", 00:21:16.779 "uuid": "a2c76cd6-7833-57e9-9c9f-939b1f39c96a", 00:21:16.779 "is_configured": true, 00:21:16.779 "data_offset": 256, 00:21:16.779 "data_size": 7936 00:21:16.779 } 00:21:16.779 ] 00:21:16.780 }' 00:21:16.780 14:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:16.780 14:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:16.780 14:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:16.780 14:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:16.780 14:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:21:16.780 14:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:16.780 14:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:16.780 14:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:16.780 14:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:16.780 14:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:16.780 14:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.780 14:47:15 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.780 14:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.780 14:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:16.780 14:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.780 14:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:16.780 "name": "raid_bdev1", 00:21:16.780 "uuid": "3ece13f6-981c-4ca9-ace7-5ba4221b1a30", 00:21:16.780 "strip_size_kb": 0, 00:21:16.780 "state": "online", 00:21:16.780 "raid_level": "raid1", 00:21:16.780 "superblock": true, 00:21:16.780 "num_base_bdevs": 2, 00:21:16.780 "num_base_bdevs_discovered": 2, 00:21:16.780 "num_base_bdevs_operational": 2, 00:21:16.780 "base_bdevs_list": [ 00:21:16.780 { 00:21:16.780 "name": "spare", 00:21:16.780 "uuid": "1fc1eb98-8b8f-581f-a283-da9a6cfe6a54", 00:21:16.780 "is_configured": true, 00:21:16.780 "data_offset": 256, 00:21:16.780 "data_size": 7936 00:21:16.780 }, 00:21:16.780 { 00:21:16.780 "name": "BaseBdev2", 00:21:16.780 "uuid": "a2c76cd6-7833-57e9-9c9f-939b1f39c96a", 00:21:16.780 "is_configured": true, 00:21:16.780 "data_offset": 256, 00:21:16.780 "data_size": 7936 00:21:16.780 } 00:21:16.780 ] 00:21:16.780 }' 00:21:16.780 14:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:17.039 14:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:17.039 14:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:17.039 14:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:17.039 14:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:17.039 14:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:17.039 14:47:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:17.039 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:17.039 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:17.039 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:17.039 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:17.039 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:17.039 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:17.039 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:17.039 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.039 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.039 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:17.039 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.039 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.039 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:17.039 "name": "raid_bdev1", 00:21:17.039 "uuid": "3ece13f6-981c-4ca9-ace7-5ba4221b1a30", 00:21:17.039 
"strip_size_kb": 0, 00:21:17.039 "state": "online", 00:21:17.039 "raid_level": "raid1", 00:21:17.039 "superblock": true, 00:21:17.039 "num_base_bdevs": 2, 00:21:17.039 "num_base_bdevs_discovered": 2, 00:21:17.039 "num_base_bdevs_operational": 2, 00:21:17.039 "base_bdevs_list": [ 00:21:17.039 { 00:21:17.039 "name": "spare", 00:21:17.039 "uuid": "1fc1eb98-8b8f-581f-a283-da9a6cfe6a54", 00:21:17.039 "is_configured": true, 00:21:17.039 "data_offset": 256, 00:21:17.039 "data_size": 7936 00:21:17.039 }, 00:21:17.039 { 00:21:17.039 "name": "BaseBdev2", 00:21:17.039 "uuid": "a2c76cd6-7833-57e9-9c9f-939b1f39c96a", 00:21:17.039 "is_configured": true, 00:21:17.039 "data_offset": 256, 00:21:17.039 "data_size": 7936 00:21:17.039 } 00:21:17.039 ] 00:21:17.039 }' 00:21:17.039 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:17.039 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:17.606 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:17.606 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.606 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:17.606 [2024-11-04 14:47:16.517701] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:17.606 [2024-11-04 14:47:16.517738] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:17.606 [2024-11-04 14:47:16.517902] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:17.606 [2024-11-04 14:47:16.518017] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:17.606 [2024-11-04 14:47:16.518036] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:21:17.606 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.606 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.606 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:21:17.606 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.606 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:17.606 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.606 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:17.606 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:17.606 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:17.606 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:17.606 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:17.606 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:17.606 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:17.606 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:17.606 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:17.606 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:21:17.606 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:17.606 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:17.606 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:17.865 /dev/nbd0 00:21:17.865 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:17.865 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:17.865 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:21:17.865 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:21:17.865 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:17.865 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:17.865 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:21:17.865 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:21:17.865 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:17.865 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:17.865 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:17.865 1+0 records in 00:21:17.865 1+0 records out 00:21:17.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253771 s, 16.1 MB/s 00:21:17.865 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:17.865 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:21:17.865 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:17.865 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:17.865 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:21:17.865 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:17.865 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:17.865 14:47:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:18.123 /dev/nbd1 00:21:18.123 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:18.123 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:18.123 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:21:18.123 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:21:18.123 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:18.123 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:18.123 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:21:18.123 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:21:18.123 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:18.123 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:18.123 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:18.123 1+0 records in 00:21:18.123 1+0 records out 00:21:18.123 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430354 s, 9.5 MB/s 00:21:18.124 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:18.124 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:21:18.124 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:18.124 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:18.124 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:21:18.124 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:18.124 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:18.124 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:18.382 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:18.382 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:18.382 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:18.382 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:21:18.382 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:21:18.382 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:18.382 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:18.641 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:18.641 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:18.641 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:18.641 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:18.641 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:18.641 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:18.641 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:21:18.641 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:21:18.641 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:18.641 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:18.901 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:18.901 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:18.902 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:21:18.902 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:18.902 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:18.902 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:18.902 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:21:18.902 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:21:18.902 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:18.902 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:18.902 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.902 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:18.902 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.902 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:18.902 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.902 14:47:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:18.902 [2024-11-04 14:47:18.003628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:18.902 [2024-11-04 14:47:18.003709] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:18.902 [2024-11-04 14:47:18.003772] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:18.902 [2024-11-04 14:47:18.003792] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:21:18.902 [2024-11-04 14:47:18.006556] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:18.902 [2024-11-04 14:47:18.006739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:18.902 [2024-11-04 14:47:18.006832] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:18.902 [2024-11-04 14:47:18.006920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:18.902 [2024-11-04 14:47:18.007185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:18.902 spare 00:21:18.902 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.902 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:18.902 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.902 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:19.161 [2024-11-04 14:47:18.107326] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:19.161 [2024-11-04 14:47:18.107412] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:19.161 [2024-11-04 14:47:18.107596] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:21:19.161 [2024-11-04 14:47:18.107787] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:19.161 [2024-11-04 14:47:18.107800] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:19.161 [2024-11-04 14:47:18.108015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:19.161 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:21:19.161 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:19.161 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:19.161 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:19.161 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:19.161 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:19.161 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:19.161 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:19.161 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:19.161 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:19.161 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:19.161 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.161 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.162 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.162 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:19.162 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.162 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:19.162 "name": "raid_bdev1", 00:21:19.162 "uuid": 
"3ece13f6-981c-4ca9-ace7-5ba4221b1a30", 00:21:19.162 "strip_size_kb": 0, 00:21:19.162 "state": "online", 00:21:19.162 "raid_level": "raid1", 00:21:19.162 "superblock": true, 00:21:19.162 "num_base_bdevs": 2, 00:21:19.162 "num_base_bdevs_discovered": 2, 00:21:19.162 "num_base_bdevs_operational": 2, 00:21:19.162 "base_bdevs_list": [ 00:21:19.162 { 00:21:19.162 "name": "spare", 00:21:19.162 "uuid": "1fc1eb98-8b8f-581f-a283-da9a6cfe6a54", 00:21:19.162 "is_configured": true, 00:21:19.162 "data_offset": 256, 00:21:19.162 "data_size": 7936 00:21:19.162 }, 00:21:19.162 { 00:21:19.162 "name": "BaseBdev2", 00:21:19.162 "uuid": "a2c76cd6-7833-57e9-9c9f-939b1f39c96a", 00:21:19.162 "is_configured": true, 00:21:19.162 "data_offset": 256, 00:21:19.162 "data_size": 7936 00:21:19.162 } 00:21:19.162 ] 00:21:19.162 }' 00:21:19.162 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:19.162 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:19.730 "name": "raid_bdev1", 00:21:19.730 "uuid": "3ece13f6-981c-4ca9-ace7-5ba4221b1a30", 00:21:19.730 "strip_size_kb": 0, 00:21:19.730 "state": "online", 00:21:19.730 "raid_level": "raid1", 00:21:19.730 "superblock": true, 00:21:19.730 "num_base_bdevs": 2, 00:21:19.730 "num_base_bdevs_discovered": 2, 00:21:19.730 "num_base_bdevs_operational": 2, 00:21:19.730 "base_bdevs_list": [ 00:21:19.730 { 00:21:19.730 "name": "spare", 00:21:19.730 "uuid": "1fc1eb98-8b8f-581f-a283-da9a6cfe6a54", 00:21:19.730 "is_configured": true, 00:21:19.730 "data_offset": 256, 00:21:19.730 "data_size": 7936 00:21:19.730 }, 00:21:19.730 { 00:21:19.730 "name": "BaseBdev2", 00:21:19.730 "uuid": "a2c76cd6-7833-57e9-9c9f-939b1f39c96a", 00:21:19.730 "is_configured": true, 00:21:19.730 "data_offset": 256, 00:21:19.730 "data_size": 7936 00:21:19.730 } 00:21:19.730 ] 00:21:19.730 }' 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r 
'.[].base_bdevs_list[0].name' 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:19.730 [2024-11-04 14:47:18.836242] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:19.730 14:47:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.730 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:19.989 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.989 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:19.989 "name": "raid_bdev1", 00:21:19.989 "uuid": "3ece13f6-981c-4ca9-ace7-5ba4221b1a30", 00:21:19.989 "strip_size_kb": 0, 00:21:19.989 "state": "online", 00:21:19.989 "raid_level": "raid1", 00:21:19.989 "superblock": true, 00:21:19.989 "num_base_bdevs": 2, 00:21:19.989 "num_base_bdevs_discovered": 1, 00:21:19.989 "num_base_bdevs_operational": 1, 00:21:19.989 "base_bdevs_list": [ 00:21:19.989 { 00:21:19.989 "name": null, 00:21:19.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.989 "is_configured": false, 00:21:19.989 "data_offset": 0, 00:21:19.989 "data_size": 7936 00:21:19.989 }, 00:21:19.989 { 00:21:19.989 "name": "BaseBdev2", 00:21:19.989 "uuid": "a2c76cd6-7833-57e9-9c9f-939b1f39c96a", 00:21:19.989 "is_configured": true, 00:21:19.989 "data_offset": 256, 00:21:19.989 "data_size": 7936 00:21:19.989 } 00:21:19.989 ] 00:21:19.989 }' 00:21:19.989 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:19.989 14:47:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:20.247 14:47:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:20.247 14:47:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:20.247 14:47:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:20.506 [2024-11-04 14:47:19.372499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:20.506 [2024-11-04 14:47:19.372884] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:20.506 [2024-11-04 14:47:19.373072] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:20.506 [2024-11-04 14:47:19.373129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:20.506 [2024-11-04 14:47:19.386121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:21:20.506 14:47:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:20.506 14:47:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:20.506 [2024-11-04 14:47:19.388706] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:21.438 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:21.438 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:21.438 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:21.438 14:47:20 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:21.438 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:21.438 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.438 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.439 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.439 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:21.439 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.439 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:21.439 "name": "raid_bdev1", 00:21:21.439 "uuid": "3ece13f6-981c-4ca9-ace7-5ba4221b1a30", 00:21:21.439 "strip_size_kb": 0, 00:21:21.439 "state": "online", 00:21:21.439 "raid_level": "raid1", 00:21:21.439 "superblock": true, 00:21:21.439 "num_base_bdevs": 2, 00:21:21.439 "num_base_bdevs_discovered": 2, 00:21:21.439 "num_base_bdevs_operational": 2, 00:21:21.439 "process": { 00:21:21.439 "type": "rebuild", 00:21:21.439 "target": "spare", 00:21:21.439 "progress": { 00:21:21.439 "blocks": 2560, 00:21:21.439 "percent": 32 00:21:21.439 } 00:21:21.439 }, 00:21:21.439 "base_bdevs_list": [ 00:21:21.439 { 00:21:21.439 "name": "spare", 00:21:21.439 "uuid": "1fc1eb98-8b8f-581f-a283-da9a6cfe6a54", 00:21:21.439 "is_configured": true, 00:21:21.439 "data_offset": 256, 00:21:21.439 "data_size": 7936 00:21:21.439 }, 00:21:21.439 { 00:21:21.439 "name": "BaseBdev2", 00:21:21.439 "uuid": "a2c76cd6-7833-57e9-9c9f-939b1f39c96a", 00:21:21.439 "is_configured": true, 00:21:21.439 "data_offset": 256, 00:21:21.439 "data_size": 7936 00:21:21.439 } 00:21:21.439 ] 00:21:21.439 
}' 00:21:21.439 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:21.439 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:21.439 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:21.697 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:21.697 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:21.697 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.697 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:21.697 [2024-11-04 14:47:20.582670] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:21.697 [2024-11-04 14:47:20.597979] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:21.697 [2024-11-04 14:47:20.598224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:21.697 [2024-11-04 14:47:20.598354] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:21.697 [2024-11-04 14:47:20.598394] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:21.697 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.697 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:21.697 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:21.697 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:21:21.697 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:21.697 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:21.697 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:21.697 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:21.697 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:21.697 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:21.697 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:21.697 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.697 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.697 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.697 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:21.697 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.697 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:21.697 "name": "raid_bdev1", 00:21:21.697 "uuid": "3ece13f6-981c-4ca9-ace7-5ba4221b1a30", 00:21:21.697 "strip_size_kb": 0, 00:21:21.697 "state": "online", 00:21:21.697 "raid_level": "raid1", 00:21:21.697 "superblock": true, 00:21:21.697 "num_base_bdevs": 2, 00:21:21.697 "num_base_bdevs_discovered": 1, 00:21:21.697 "num_base_bdevs_operational": 1, 00:21:21.697 "base_bdevs_list": [ 00:21:21.697 { 00:21:21.697 "name": 
null, 00:21:21.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.697 "is_configured": false, 00:21:21.697 "data_offset": 0, 00:21:21.697 "data_size": 7936 00:21:21.697 }, 00:21:21.697 { 00:21:21.697 "name": "BaseBdev2", 00:21:21.697 "uuid": "a2c76cd6-7833-57e9-9c9f-939b1f39c96a", 00:21:21.697 "is_configured": true, 00:21:21.697 "data_offset": 256, 00:21:21.697 "data_size": 7936 00:21:21.697 } 00:21:21.697 ] 00:21:21.697 }' 00:21:21.697 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:21.697 14:47:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:22.263 14:47:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:22.263 14:47:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.263 14:47:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:22.263 [2024-11-04 14:47:21.153304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:22.263 [2024-11-04 14:47:21.153514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:22.263 [2024-11-04 14:47:21.153557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:22.263 [2024-11-04 14:47:21.153576] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:22.263 [2024-11-04 14:47:21.153866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:22.263 [2024-11-04 14:47:21.153895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:22.263 [2024-11-04 14:47:21.154000] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:22.263 [2024-11-04 14:47:21.154024] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock 
seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:22.263 [2024-11-04 14:47:21.154038] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:22.263 [2024-11-04 14:47:21.154068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:22.263 [2024-11-04 14:47:21.166653] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:21:22.263 spare 00:21:22.263 14:47:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.263 14:47:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:22.263 [2024-11-04 14:47:21.169132] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:23.196 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:23.196 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:23.196 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:23.196 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:23.196 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:23.196 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.196 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.196 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.196 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:23.196 14:47:22 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.196 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:23.196 "name": "raid_bdev1", 00:21:23.196 "uuid": "3ece13f6-981c-4ca9-ace7-5ba4221b1a30", 00:21:23.196 "strip_size_kb": 0, 00:21:23.196 "state": "online", 00:21:23.196 "raid_level": "raid1", 00:21:23.196 "superblock": true, 00:21:23.196 "num_base_bdevs": 2, 00:21:23.196 "num_base_bdevs_discovered": 2, 00:21:23.196 "num_base_bdevs_operational": 2, 00:21:23.196 "process": { 00:21:23.196 "type": "rebuild", 00:21:23.196 "target": "spare", 00:21:23.196 "progress": { 00:21:23.196 "blocks": 2560, 00:21:23.196 "percent": 32 00:21:23.196 } 00:21:23.196 }, 00:21:23.196 "base_bdevs_list": [ 00:21:23.196 { 00:21:23.196 "name": "spare", 00:21:23.196 "uuid": "1fc1eb98-8b8f-581f-a283-da9a6cfe6a54", 00:21:23.196 "is_configured": true, 00:21:23.196 "data_offset": 256, 00:21:23.196 "data_size": 7936 00:21:23.196 }, 00:21:23.196 { 00:21:23.196 "name": "BaseBdev2", 00:21:23.196 "uuid": "a2c76cd6-7833-57e9-9c9f-939b1f39c96a", 00:21:23.196 "is_configured": true, 00:21:23.196 "data_offset": 256, 00:21:23.196 "data_size": 7936 00:21:23.196 } 00:21:23.196 ] 00:21:23.196 }' 00:21:23.196 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:23.196 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:23.196 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:23.196 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:23.196 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:23.196 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.196 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:23.454 [2024-11-04 14:47:22.322953] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:23.454 [2024-11-04 14:47:22.378313] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:23.454 [2024-11-04 14:47:22.378396] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:23.454 [2024-11-04 14:47:22.378423] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:23.454 [2024-11-04 14:47:22.378435] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:23.454 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.454 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:23.454 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:23.454 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:23.454 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:23.454 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:23.454 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:23.454 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:23.454 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:23.454 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:21:23.454 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:23.454 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.454 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.454 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.454 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:23.454 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.454 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:23.454 "name": "raid_bdev1", 00:21:23.454 "uuid": "3ece13f6-981c-4ca9-ace7-5ba4221b1a30", 00:21:23.454 "strip_size_kb": 0, 00:21:23.454 "state": "online", 00:21:23.454 "raid_level": "raid1", 00:21:23.454 "superblock": true, 00:21:23.454 "num_base_bdevs": 2, 00:21:23.454 "num_base_bdevs_discovered": 1, 00:21:23.454 "num_base_bdevs_operational": 1, 00:21:23.454 "base_bdevs_list": [ 00:21:23.454 { 00:21:23.454 "name": null, 00:21:23.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.454 "is_configured": false, 00:21:23.454 "data_offset": 0, 00:21:23.454 "data_size": 7936 00:21:23.454 }, 00:21:23.454 { 00:21:23.454 "name": "BaseBdev2", 00:21:23.454 "uuid": "a2c76cd6-7833-57e9-9c9f-939b1f39c96a", 00:21:23.454 "is_configured": true, 00:21:23.454 "data_offset": 256, 00:21:23.454 "data_size": 7936 00:21:23.454 } 00:21:23.454 ] 00:21:23.454 }' 00:21:23.454 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:23.454 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:24.064 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:24.064 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:24.064 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:24.064 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:24.064 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:24.064 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.064 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.064 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.064 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:24.064 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.064 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:24.064 "name": "raid_bdev1", 00:21:24.064 "uuid": "3ece13f6-981c-4ca9-ace7-5ba4221b1a30", 00:21:24.064 "strip_size_kb": 0, 00:21:24.064 "state": "online", 00:21:24.064 "raid_level": "raid1", 00:21:24.064 "superblock": true, 00:21:24.064 "num_base_bdevs": 2, 00:21:24.064 "num_base_bdevs_discovered": 1, 00:21:24.064 "num_base_bdevs_operational": 1, 00:21:24.064 "base_bdevs_list": [ 00:21:24.064 { 00:21:24.064 "name": null, 00:21:24.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.064 "is_configured": false, 00:21:24.064 "data_offset": 0, 00:21:24.064 "data_size": 7936 00:21:24.064 }, 00:21:24.064 { 00:21:24.064 "name": "BaseBdev2", 00:21:24.064 "uuid": "a2c76cd6-7833-57e9-9c9f-939b1f39c96a", 
00:21:24.064 "is_configured": true, 00:21:24.064 "data_offset": 256, 00:21:24.064 "data_size": 7936 00:21:24.064 } 00:21:24.064 ] 00:21:24.064 }' 00:21:24.064 14:47:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:24.064 14:47:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:24.064 14:47:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:24.064 14:47:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:24.064 14:47:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:24.064 14:47:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.064 14:47:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:24.064 14:47:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.064 14:47:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:24.064 14:47:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.064 14:47:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:24.064 [2024-11-04 14:47:23.089434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:24.064 [2024-11-04 14:47:23.089509] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:24.064 [2024-11-04 14:47:23.089547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:21:24.064 [2024-11-04 14:47:23.089561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:21:24.064 [2024-11-04 14:47:23.089835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:24.064 [2024-11-04 14:47:23.089856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:24.064 [2024-11-04 14:47:23.089917] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:24.064 [2024-11-04 14:47:23.089951] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:24.064 [2024-11-04 14:47:23.089965] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:24.064 [2024-11-04 14:47:23.089994] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:24.064 BaseBdev1 00:21:24.064 14:47:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.065 14:47:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:25.000 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:25.000 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:25.000 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:25.000 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:25.000 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:25.000 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:25.000 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:25.000 14:47:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:25.000 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:25.001 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:25.001 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.001 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.001 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.001 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:25.260 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.260 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:25.260 "name": "raid_bdev1", 00:21:25.260 "uuid": "3ece13f6-981c-4ca9-ace7-5ba4221b1a30", 00:21:25.260 "strip_size_kb": 0, 00:21:25.260 "state": "online", 00:21:25.260 "raid_level": "raid1", 00:21:25.260 "superblock": true, 00:21:25.260 "num_base_bdevs": 2, 00:21:25.260 "num_base_bdevs_discovered": 1, 00:21:25.260 "num_base_bdevs_operational": 1, 00:21:25.260 "base_bdevs_list": [ 00:21:25.260 { 00:21:25.260 "name": null, 00:21:25.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.260 "is_configured": false, 00:21:25.260 "data_offset": 0, 00:21:25.260 "data_size": 7936 00:21:25.260 }, 00:21:25.260 { 00:21:25.260 "name": "BaseBdev2", 00:21:25.260 "uuid": "a2c76cd6-7833-57e9-9c9f-939b1f39c96a", 00:21:25.260 "is_configured": true, 00:21:25.260 "data_offset": 256, 00:21:25.260 "data_size": 7936 00:21:25.260 } 00:21:25.260 ] 00:21:25.260 }' 00:21:25.260 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:25.260 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:25.520 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:25.520 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:25.520 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:25.520 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:25.520 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:25.520 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.520 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.520 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.520 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:25.520 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:25.520 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:25.520 "name": "raid_bdev1", 00:21:25.520 "uuid": "3ece13f6-981c-4ca9-ace7-5ba4221b1a30", 00:21:25.520 "strip_size_kb": 0, 00:21:25.520 "state": "online", 00:21:25.520 "raid_level": "raid1", 00:21:25.520 "superblock": true, 00:21:25.520 "num_base_bdevs": 2, 00:21:25.520 "num_base_bdevs_discovered": 1, 00:21:25.520 "num_base_bdevs_operational": 1, 00:21:25.520 "base_bdevs_list": [ 00:21:25.520 { 00:21:25.520 "name": null, 00:21:25.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.520 
"is_configured": false, 00:21:25.520 "data_offset": 0, 00:21:25.520 "data_size": 7936 00:21:25.520 }, 00:21:25.520 { 00:21:25.520 "name": "BaseBdev2", 00:21:25.520 "uuid": "a2c76cd6-7833-57e9-9c9f-939b1f39c96a", 00:21:25.520 "is_configured": true, 00:21:25.520 "data_offset": 256, 00:21:25.520 "data_size": 7936 00:21:25.520 } 00:21:25.520 ] 00:21:25.520 }' 00:21:25.520 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:25.779 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:25.779 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:25.779 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:25.779 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:25.779 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:21:25.779 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:25.779 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:25.779 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:25.779 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:25.779 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:25.779 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:25.779 14:47:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:25.779 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:25.779 [2024-11-04 14:47:24.750092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:25.779 [2024-11-04 14:47:24.750447] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:25.779 [2024-11-04 14:47:24.750494] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:25.779 request: 00:21:25.779 { 00:21:25.779 "base_bdev": "BaseBdev1", 00:21:25.779 "raid_bdev": "raid_bdev1", 00:21:25.779 "method": "bdev_raid_add_base_bdev", 00:21:25.779 "req_id": 1 00:21:25.779 } 00:21:25.779 Got JSON-RPC error response 00:21:25.779 response: 00:21:25.779 { 00:21:25.779 "code": -22, 00:21:25.779 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:25.779 } 00:21:25.779 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:25.779 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:21:25.779 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:25.779 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:25.779 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:25.779 14:47:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:26.765 14:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:26.765 14:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:21:26.765 14:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:26.765 14:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:26.765 14:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:26.765 14:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:26.765 14:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:26.765 14:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:26.765 14:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:26.765 14:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:26.765 14:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:26.765 14:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.765 14:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.765 14:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:26.765 14:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.765 14:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:26.765 "name": "raid_bdev1", 00:21:26.765 "uuid": "3ece13f6-981c-4ca9-ace7-5ba4221b1a30", 00:21:26.765 "strip_size_kb": 0, 00:21:26.765 "state": "online", 00:21:26.765 "raid_level": "raid1", 00:21:26.765 "superblock": true, 00:21:26.765 "num_base_bdevs": 2, 00:21:26.765 
"num_base_bdevs_discovered": 1, 00:21:26.766 "num_base_bdevs_operational": 1, 00:21:26.766 "base_bdevs_list": [ 00:21:26.766 { 00:21:26.766 "name": null, 00:21:26.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.766 "is_configured": false, 00:21:26.766 "data_offset": 0, 00:21:26.766 "data_size": 7936 00:21:26.766 }, 00:21:26.766 { 00:21:26.766 "name": "BaseBdev2", 00:21:26.766 "uuid": "a2c76cd6-7833-57e9-9c9f-939b1f39c96a", 00:21:26.766 "is_configured": true, 00:21:26.766 "data_offset": 256, 00:21:26.766 "data_size": 7936 00:21:26.766 } 00:21:26.766 ] 00:21:26.766 }' 00:21:26.766 14:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:26.766 14:47:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:27.333 14:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:27.333 14:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:27.333 14:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:27.333 14:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:27.333 14:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:27.333 14:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.333 14:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:27.333 14:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:27.333 14:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.333 14:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:27.333 14:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:27.333 "name": "raid_bdev1", 00:21:27.333 "uuid": "3ece13f6-981c-4ca9-ace7-5ba4221b1a30", 00:21:27.333 "strip_size_kb": 0, 00:21:27.333 "state": "online", 00:21:27.333 "raid_level": "raid1", 00:21:27.333 "superblock": true, 00:21:27.333 "num_base_bdevs": 2, 00:21:27.333 "num_base_bdevs_discovered": 1, 00:21:27.333 "num_base_bdevs_operational": 1, 00:21:27.333 "base_bdevs_list": [ 00:21:27.333 { 00:21:27.333 "name": null, 00:21:27.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.333 "is_configured": false, 00:21:27.333 "data_offset": 0, 00:21:27.333 "data_size": 7936 00:21:27.333 }, 00:21:27.333 { 00:21:27.333 "name": "BaseBdev2", 00:21:27.333 "uuid": "a2c76cd6-7833-57e9-9c9f-939b1f39c96a", 00:21:27.333 "is_configured": true, 00:21:27.333 "data_offset": 256, 00:21:27.333 "data_size": 7936 00:21:27.333 } 00:21:27.333 ] 00:21:27.333 }' 00:21:27.333 14:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:27.333 14:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:27.333 14:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:27.333 14:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:27.333 14:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88206 00:21:27.333 14:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 88206 ']' 00:21:27.333 14:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 88206 00:21:27.591 14:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:21:27.591 14:47:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:27.591 14:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88206 00:21:27.591 killing process with pid 88206 00:21:27.591 Received shutdown signal, test time was about 60.000000 seconds 00:21:27.591 00:21:27.591 Latency(us) 00:21:27.591 [2024-11-04T14:47:26.714Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:27.591 [2024-11-04T14:47:26.714Z] =================================================================================================================== 00:21:27.591 [2024-11-04T14:47:26.714Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:27.591 14:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:27.591 14:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:27.591 14:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88206' 00:21:27.591 14:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 88206 00:21:27.591 [2024-11-04 14:47:26.487467] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:27.591 14:47:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 88206 00:21:27.591 [2024-11-04 14:47:26.487635] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:27.591 [2024-11-04 14:47:26.487697] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:27.591 [2024-11-04 14:47:26.487715] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:27.850 [2024-11-04 14:47:26.762324] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:21:28.785 14:47:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:21:28.785 00:21:28.785 real 0m21.600s 00:21:28.785 user 0m29.359s 00:21:28.785 sys 0m2.477s 00:21:28.785 14:47:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:28.785 ************************************ 00:21:28.785 END TEST raid_rebuild_test_sb_md_separate 00:21:28.785 14:47:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:21:28.785 ************************************ 00:21:28.785 14:47:27 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:21:28.785 14:47:27 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:21:28.785 14:47:27 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:21:28.785 14:47:27 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:28.785 14:47:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:28.785 ************************************ 00:21:28.785 START TEST raid_state_function_test_sb_md_interleaved 00:21:28.785 ************************************ 00:21:28.785 14:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:21:28.785 14:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:21:28.785 14:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:21:28.785 14:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:28.785 14:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:28.785 14:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:28.785 14:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:28.785 14:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:28.785 14:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:28.785 14:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:28.785 14:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:28.785 14:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:28.785 14:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:28.785 14:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:28.785 14:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:28.785 14:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:28.785 14:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:28.785 14:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:28.785 14:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:28.785 14:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:21:28.785 14:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:21:28.785 14:47:27 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:28.785 14:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:28.785 14:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88908 00:21:28.785 14:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:28.785 Process raid pid: 88908 00:21:28.785 14:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88908' 00:21:28.785 14:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88908 00:21:28.785 14:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 88908 ']' 00:21:28.785 14:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.785 14:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:28.785 14:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.785 14:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:28.785 14:47:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:29.045 [2024-11-04 14:47:27.911100] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:21:29.045 [2024-11-04 14:47:27.911264] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.045 [2024-11-04 14:47:28.089356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.303 [2024-11-04 14:47:28.222499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.562 [2024-11-04 14:47:28.430951] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:29.562 [2024-11-04 14:47:28.431015] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:29.821 14:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:29.821 14:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:21:29.821 14:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:29.821 14:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.821 14:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:29.821 [2024-11-04 14:47:28.902157] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:29.821 [2024-11-04 14:47:28.902228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:29.821 [2024-11-04 14:47:28.902254] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:29.821 [2024-11-04 14:47:28.902279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:29.821 14:47:28 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.821 14:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:29.821 14:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:29.821 14:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:29.821 14:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:29.821 14:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:29.821 14:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:29.821 14:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:29.821 14:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:29.821 14:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:29.821 14:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:29.821 14:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.821 14:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:29.821 14:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.821 14:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:29.821 14:47:28 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.080 14:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:30.080 "name": "Existed_Raid", 00:21:30.080 "uuid": "29144e44-73e5-4bb1-8ae5-0e31a0e43bd1", 00:21:30.080 "strip_size_kb": 0, 00:21:30.080 "state": "configuring", 00:21:30.080 "raid_level": "raid1", 00:21:30.080 "superblock": true, 00:21:30.080 "num_base_bdevs": 2, 00:21:30.080 "num_base_bdevs_discovered": 0, 00:21:30.080 "num_base_bdevs_operational": 2, 00:21:30.080 "base_bdevs_list": [ 00:21:30.080 { 00:21:30.080 "name": "BaseBdev1", 00:21:30.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.080 "is_configured": false, 00:21:30.080 "data_offset": 0, 00:21:30.080 "data_size": 0 00:21:30.080 }, 00:21:30.080 { 00:21:30.080 "name": "BaseBdev2", 00:21:30.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.080 "is_configured": false, 00:21:30.080 "data_offset": 0, 00:21:30.080 "data_size": 0 00:21:30.080 } 00:21:30.080 ] 00:21:30.080 }' 00:21:30.080 14:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:30.080 14:47:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:30.657 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:30.657 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.657 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:30.657 [2024-11-04 14:47:29.466380] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:30.657 [2024-11-04 14:47:29.466621] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:21:30.657 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.657 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:30.657 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.657 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:30.657 [2024-11-04 14:47:29.474345] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:30.657 [2024-11-04 14:47:29.474412] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:30.657 [2024-11-04 14:47:29.474438] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:30.657 [2024-11-04 14:47:29.474468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:30.657 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.657 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:21:30.657 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.658 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:30.658 [2024-11-04 14:47:29.523562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:30.658 BaseBdev1 00:21:30.658 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.658 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:30.658 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:21:30.658 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:30.658 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:21:30.658 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:30.658 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:30.658 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:30.658 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.658 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:30.658 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.658 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:30.658 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.658 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:30.658 [ 00:21:30.658 { 00:21:30.658 "name": "BaseBdev1", 00:21:30.658 "aliases": [ 00:21:30.658 "465c34aa-4249-4431-bf79-f27cc5b1e0c1" 00:21:30.658 ], 00:21:30.658 "product_name": "Malloc disk", 00:21:30.658 "block_size": 4128, 00:21:30.658 "num_blocks": 8192, 00:21:30.658 "uuid": "465c34aa-4249-4431-bf79-f27cc5b1e0c1", 00:21:30.658 "md_size": 32, 00:21:30.658 
"md_interleave": true, 00:21:30.658 "dif_type": 0, 00:21:30.658 "assigned_rate_limits": { 00:21:30.658 "rw_ios_per_sec": 0, 00:21:30.658 "rw_mbytes_per_sec": 0, 00:21:30.658 "r_mbytes_per_sec": 0, 00:21:30.658 "w_mbytes_per_sec": 0 00:21:30.658 }, 00:21:30.658 "claimed": true, 00:21:30.658 "claim_type": "exclusive_write", 00:21:30.658 "zoned": false, 00:21:30.658 "supported_io_types": { 00:21:30.658 "read": true, 00:21:30.658 "write": true, 00:21:30.659 "unmap": true, 00:21:30.659 "flush": true, 00:21:30.659 "reset": true, 00:21:30.659 "nvme_admin": false, 00:21:30.659 "nvme_io": false, 00:21:30.659 "nvme_io_md": false, 00:21:30.659 "write_zeroes": true, 00:21:30.659 "zcopy": true, 00:21:30.659 "get_zone_info": false, 00:21:30.659 "zone_management": false, 00:21:30.659 "zone_append": false, 00:21:30.659 "compare": false, 00:21:30.659 "compare_and_write": false, 00:21:30.659 "abort": true, 00:21:30.659 "seek_hole": false, 00:21:30.659 "seek_data": false, 00:21:30.659 "copy": true, 00:21:30.659 "nvme_iov_md": false 00:21:30.659 }, 00:21:30.659 "memory_domains": [ 00:21:30.659 { 00:21:30.659 "dma_device_id": "system", 00:21:30.659 "dma_device_type": 1 00:21:30.659 }, 00:21:30.659 { 00:21:30.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.659 "dma_device_type": 2 00:21:30.659 } 00:21:30.659 ], 00:21:30.659 "driver_specific": {} 00:21:30.659 } 00:21:30.659 ] 00:21:30.659 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.659 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:21:30.659 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:30.659 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:30.659 14:47:29 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:30.659 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:30.659 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:30.659 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:30.659 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:30.659 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:30.659 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:30.659 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:30.659 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.659 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.659 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:30.659 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:30.659 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.659 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:30.659 "name": "Existed_Raid", 00:21:30.659 "uuid": "84f5df55-4e8e-42a9-87a8-b1bd1b36407d", 00:21:30.659 "strip_size_kb": 0, 00:21:30.659 "state": "configuring", 00:21:30.659 "raid_level": "raid1", 
00:21:30.659 "superblock": true, 00:21:30.659 "num_base_bdevs": 2, 00:21:30.659 "num_base_bdevs_discovered": 1, 00:21:30.659 "num_base_bdevs_operational": 2, 00:21:30.659 "base_bdevs_list": [ 00:21:30.659 { 00:21:30.659 "name": "BaseBdev1", 00:21:30.659 "uuid": "465c34aa-4249-4431-bf79-f27cc5b1e0c1", 00:21:30.659 "is_configured": true, 00:21:30.659 "data_offset": 256, 00:21:30.659 "data_size": 7936 00:21:30.659 }, 00:21:30.659 { 00:21:30.659 "name": "BaseBdev2", 00:21:30.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.659 "is_configured": false, 00:21:30.659 "data_offset": 0, 00:21:30.659 "data_size": 0 00:21:30.659 } 00:21:30.659 ] 00:21:30.659 }' 00:21:30.659 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:30.659 14:47:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:31.225 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:31.225 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.225 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:31.225 [2024-11-04 14:47:30.095841] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:31.225 [2024-11-04 14:47:30.096109] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:31.225 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.225 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:31.225 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:21:31.225 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:31.225 [2024-11-04 14:47:30.103886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:31.225 [2024-11-04 14:47:30.106713] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:31.225 [2024-11-04 14:47:30.106772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:31.225 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.225 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:31.225 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:31.225 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:31.225 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:31.225 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:31.225 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:31.225 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:31.225 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:31.225 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:31.225 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:31.225 
14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:31.225 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:31.226 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.226 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:31.226 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.226 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:31.226 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.226 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:31.226 "name": "Existed_Raid", 00:21:31.226 "uuid": "82d92a35-c209-4786-8b25-881a7f019af7", 00:21:31.226 "strip_size_kb": 0, 00:21:31.226 "state": "configuring", 00:21:31.226 "raid_level": "raid1", 00:21:31.226 "superblock": true, 00:21:31.226 "num_base_bdevs": 2, 00:21:31.226 "num_base_bdevs_discovered": 1, 00:21:31.226 "num_base_bdevs_operational": 2, 00:21:31.226 "base_bdevs_list": [ 00:21:31.226 { 00:21:31.226 "name": "BaseBdev1", 00:21:31.226 "uuid": "465c34aa-4249-4431-bf79-f27cc5b1e0c1", 00:21:31.226 "is_configured": true, 00:21:31.226 "data_offset": 256, 00:21:31.226 "data_size": 7936 00:21:31.226 }, 00:21:31.226 { 00:21:31.226 "name": "BaseBdev2", 00:21:31.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.226 "is_configured": false, 00:21:31.226 "data_offset": 0, 00:21:31.226 "data_size": 0 00:21:31.226 } 00:21:31.226 ] 00:21:31.226 }' 00:21:31.226 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:21:31.226 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:31.791 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:21:31.791 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.791 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:31.791 [2024-11-04 14:47:30.666820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:31.791 [2024-11-04 14:47:30.667126] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:31.791 [2024-11-04 14:47:30.667147] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:31.791 [2024-11-04 14:47:30.667257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:31.791 [2024-11-04 14:47:30.667376] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:31.791 [2024-11-04 14:47:30.667415] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:31.791 BaseBdev2 00:21:31.791 [2024-11-04 14:47:30.667540] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:31.791 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.791 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:31.791 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:21:31.791 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:21:31.791 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:21:31.791 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:31.791 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:31.791 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:31.792 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.792 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:31.792 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.792 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:31.792 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.792 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:31.792 [ 00:21:31.792 { 00:21:31.792 "name": "BaseBdev2", 00:21:31.792 "aliases": [ 00:21:31.792 "b5acd4d4-5b28-43c8-9258-1b99f8835ea8" 00:21:31.792 ], 00:21:31.792 "product_name": "Malloc disk", 00:21:31.792 "block_size": 4128, 00:21:31.792 "num_blocks": 8192, 00:21:31.792 "uuid": "b5acd4d4-5b28-43c8-9258-1b99f8835ea8", 00:21:31.792 "md_size": 32, 00:21:31.792 "md_interleave": true, 00:21:31.792 "dif_type": 0, 00:21:31.792 "assigned_rate_limits": { 00:21:31.792 "rw_ios_per_sec": 0, 00:21:31.792 "rw_mbytes_per_sec": 0, 00:21:31.792 "r_mbytes_per_sec": 0, 00:21:31.792 "w_mbytes_per_sec": 0 00:21:31.792 }, 00:21:31.792 "claimed": true, 00:21:31.792 "claim_type": "exclusive_write", 
00:21:31.792 "zoned": false, 00:21:31.792 "supported_io_types": { 00:21:31.792 "read": true, 00:21:31.792 "write": true, 00:21:31.792 "unmap": true, 00:21:31.792 "flush": true, 00:21:31.792 "reset": true, 00:21:31.792 "nvme_admin": false, 00:21:31.792 "nvme_io": false, 00:21:31.792 "nvme_io_md": false, 00:21:31.792 "write_zeroes": true, 00:21:31.792 "zcopy": true, 00:21:31.792 "get_zone_info": false, 00:21:31.792 "zone_management": false, 00:21:31.792 "zone_append": false, 00:21:31.792 "compare": false, 00:21:31.792 "compare_and_write": false, 00:21:31.792 "abort": true, 00:21:31.792 "seek_hole": false, 00:21:31.792 "seek_data": false, 00:21:31.792 "copy": true, 00:21:31.792 "nvme_iov_md": false 00:21:31.792 }, 00:21:31.792 "memory_domains": [ 00:21:31.792 { 00:21:31.792 "dma_device_id": "system", 00:21:31.792 "dma_device_type": 1 00:21:31.792 }, 00:21:31.792 { 00:21:31.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:31.792 "dma_device_type": 2 00:21:31.792 } 00:21:31.792 ], 00:21:31.792 "driver_specific": {} 00:21:31.792 } 00:21:31.792 ] 00:21:31.792 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.792 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:21:31.792 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:31.792 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:31.792 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:31.792 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:31.792 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:31.792 
14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:31.792 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:31.792 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:31.792 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:31.792 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:31.792 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:31.792 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:31.792 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.792 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.792 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:31.792 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:31.792 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.792 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:31.792 "name": "Existed_Raid", 00:21:31.792 "uuid": "82d92a35-c209-4786-8b25-881a7f019af7", 00:21:31.792 "strip_size_kb": 0, 00:21:31.792 "state": "online", 00:21:31.792 "raid_level": "raid1", 00:21:31.792 "superblock": true, 00:21:31.792 "num_base_bdevs": 2, 00:21:31.792 "num_base_bdevs_discovered": 2, 00:21:31.792 
"num_base_bdevs_operational": 2, 00:21:31.792 "base_bdevs_list": [ 00:21:31.792 { 00:21:31.792 "name": "BaseBdev1", 00:21:31.792 "uuid": "465c34aa-4249-4431-bf79-f27cc5b1e0c1", 00:21:31.792 "is_configured": true, 00:21:31.792 "data_offset": 256, 00:21:31.792 "data_size": 7936 00:21:31.792 }, 00:21:31.792 { 00:21:31.792 "name": "BaseBdev2", 00:21:31.792 "uuid": "b5acd4d4-5b28-43c8-9258-1b99f8835ea8", 00:21:31.792 "is_configured": true, 00:21:31.792 "data_offset": 256, 00:21:31.792 "data_size": 7936 00:21:31.792 } 00:21:31.792 ] 00:21:31.792 }' 00:21:31.792 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:31.792 14:47:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:32.359 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:32.359 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:32.359 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:32.359 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:32.359 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:21:32.359 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:32.359 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:32.359 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:32.359 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.359 14:47:31 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:32.359 [2024-11-04 14:47:31.203484] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:32.359 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.359 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:32.359 "name": "Existed_Raid", 00:21:32.359 "aliases": [ 00:21:32.359 "82d92a35-c209-4786-8b25-881a7f019af7" 00:21:32.359 ], 00:21:32.359 "product_name": "Raid Volume", 00:21:32.359 "block_size": 4128, 00:21:32.359 "num_blocks": 7936, 00:21:32.359 "uuid": "82d92a35-c209-4786-8b25-881a7f019af7", 00:21:32.359 "md_size": 32, 00:21:32.359 "md_interleave": true, 00:21:32.359 "dif_type": 0, 00:21:32.359 "assigned_rate_limits": { 00:21:32.359 "rw_ios_per_sec": 0, 00:21:32.359 "rw_mbytes_per_sec": 0, 00:21:32.359 "r_mbytes_per_sec": 0, 00:21:32.359 "w_mbytes_per_sec": 0 00:21:32.359 }, 00:21:32.359 "claimed": false, 00:21:32.359 "zoned": false, 00:21:32.359 "supported_io_types": { 00:21:32.359 "read": true, 00:21:32.359 "write": true, 00:21:32.359 "unmap": false, 00:21:32.359 "flush": false, 00:21:32.359 "reset": true, 00:21:32.359 "nvme_admin": false, 00:21:32.359 "nvme_io": false, 00:21:32.359 "nvme_io_md": false, 00:21:32.359 "write_zeroes": true, 00:21:32.359 "zcopy": false, 00:21:32.359 "get_zone_info": false, 00:21:32.359 "zone_management": false, 00:21:32.359 "zone_append": false, 00:21:32.359 "compare": false, 00:21:32.359 "compare_and_write": false, 00:21:32.359 "abort": false, 00:21:32.359 "seek_hole": false, 00:21:32.359 "seek_data": false, 00:21:32.359 "copy": false, 00:21:32.359 "nvme_iov_md": false 00:21:32.359 }, 00:21:32.359 "memory_domains": [ 00:21:32.359 { 00:21:32.359 "dma_device_id": "system", 00:21:32.359 "dma_device_type": 1 00:21:32.359 }, 00:21:32.359 { 00:21:32.359 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:21:32.359 "dma_device_type": 2 00:21:32.359 }, 00:21:32.359 { 00:21:32.359 "dma_device_id": "system", 00:21:32.359 "dma_device_type": 1 00:21:32.359 }, 00:21:32.359 { 00:21:32.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.359 "dma_device_type": 2 00:21:32.359 } 00:21:32.359 ], 00:21:32.359 "driver_specific": { 00:21:32.359 "raid": { 00:21:32.359 "uuid": "82d92a35-c209-4786-8b25-881a7f019af7", 00:21:32.359 "strip_size_kb": 0, 00:21:32.359 "state": "online", 00:21:32.359 "raid_level": "raid1", 00:21:32.359 "superblock": true, 00:21:32.359 "num_base_bdevs": 2, 00:21:32.359 "num_base_bdevs_discovered": 2, 00:21:32.359 "num_base_bdevs_operational": 2, 00:21:32.359 "base_bdevs_list": [ 00:21:32.359 { 00:21:32.359 "name": "BaseBdev1", 00:21:32.359 "uuid": "465c34aa-4249-4431-bf79-f27cc5b1e0c1", 00:21:32.359 "is_configured": true, 00:21:32.359 "data_offset": 256, 00:21:32.359 "data_size": 7936 00:21:32.359 }, 00:21:32.359 { 00:21:32.359 "name": "BaseBdev2", 00:21:32.359 "uuid": "b5acd4d4-5b28-43c8-9258-1b99f8835ea8", 00:21:32.359 "is_configured": true, 00:21:32.359 "data_offset": 256, 00:21:32.359 "data_size": 7936 00:21:32.359 } 00:21:32.359 ] 00:21:32.359 } 00:21:32.359 } 00:21:32.359 }' 00:21:32.360 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:32.360 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:32.360 BaseBdev2' 00:21:32.360 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:32.360 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:21:32.360 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:21:32.360 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:32.360 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.360 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:32.360 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:32.360 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.360 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:32.360 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:32.360 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:32.360 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:32.360 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.360 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:32.360 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:32.360 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.360 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:32.360 
14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:32.360 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:32.360 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.360 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:32.360 [2024-11-04 14:47:31.459249] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:32.618 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.618 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:32.618 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:21:32.618 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:32.618 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:21:32.618 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:32.618 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:21:32.618 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:32.618 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:32.618 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:32.618 14:47:31 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:32.618 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:32.618 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:32.618 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:32.618 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:32.618 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:32.618 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.618 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:32.618 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.618 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:32.618 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.618 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:32.618 "name": "Existed_Raid", 00:21:32.618 "uuid": "82d92a35-c209-4786-8b25-881a7f019af7", 00:21:32.618 "strip_size_kb": 0, 00:21:32.618 "state": "online", 00:21:32.618 "raid_level": "raid1", 00:21:32.618 "superblock": true, 00:21:32.618 "num_base_bdevs": 2, 00:21:32.618 "num_base_bdevs_discovered": 1, 00:21:32.618 "num_base_bdevs_operational": 1, 00:21:32.618 "base_bdevs_list": [ 00:21:32.618 { 00:21:32.618 "name": null, 00:21:32.618 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:32.618 "is_configured": false, 00:21:32.618 "data_offset": 0, 00:21:32.618 "data_size": 7936 00:21:32.618 }, 00:21:32.618 { 00:21:32.618 "name": "BaseBdev2", 00:21:32.618 "uuid": "b5acd4d4-5b28-43c8-9258-1b99f8835ea8", 00:21:32.618 "is_configured": true, 00:21:32.618 "data_offset": 256, 00:21:32.618 "data_size": 7936 00:21:32.618 } 00:21:32.618 ] 00:21:32.618 }' 00:21:32.618 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:32.618 14:47:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:33.186 14:47:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:33.186 14:47:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:33.186 14:47:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:33.186 14:47:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.186 14:47:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.186 14:47:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:33.186 14:47:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.186 14:47:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:33.186 14:47:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:33.186 14:47:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:33.186 14:47:32 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.186 14:47:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:33.186 [2024-11-04 14:47:32.121247] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:33.186 [2024-11-04 14:47:32.121379] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:33.186 [2024-11-04 14:47:32.208280] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:33.186 [2024-11-04 14:47:32.208497] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:33.186 [2024-11-04 14:47:32.208533] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:33.186 14:47:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.186 14:47:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:33.186 14:47:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:33.186 14:47:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.186 14:47:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.186 14:47:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:33.186 14:47:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:33.186 14:47:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.186 14:47:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:33.186 14:47:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:33.186 14:47:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:21:33.186 14:47:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88908 00:21:33.186 14:47:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 88908 ']' 00:21:33.186 14:47:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 88908 00:21:33.186 14:47:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:21:33.186 14:47:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:33.186 14:47:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88908 00:21:33.186 14:47:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:33.186 killing process with pid 88908 00:21:33.186 14:47:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:33.186 14:47:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88908' 00:21:33.186 14:47:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 88908 00:21:33.186 [2024-11-04 14:47:32.296592] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:33.186 14:47:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 88908 00:21:33.445 [2024-11-04 14:47:32.311229] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:34.382 
14:47:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:21:34.382 00:21:34.382 real 0m5.561s 00:21:34.382 user 0m8.403s 00:21:34.382 sys 0m0.827s 00:21:34.382 14:47:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:34.382 ************************************ 00:21:34.382 END TEST raid_state_function_test_sb_md_interleaved 00:21:34.382 ************************************ 00:21:34.382 14:47:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:34.382 14:47:33 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:21:34.382 14:47:33 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:21:34.382 14:47:33 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:34.382 14:47:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:34.382 ************************************ 00:21:34.382 START TEST raid_superblock_test_md_interleaved 00:21:34.382 ************************************ 00:21:34.382 14:47:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:21:34.382 14:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:21:34.382 14:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:21:34.382 14:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:34.382 14:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:34.382 14:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:34.382 14:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:21:34.382 14:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:34.382 14:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:34.382 14:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:34.382 14:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:34.382 14:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:34.382 14:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:34.382 14:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:34.382 14:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:21:34.382 14:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:21:34.382 14:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89165 00:21:34.382 14:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:34.382 14:47:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89165 00:21:34.382 14:47:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 89165 ']' 00:21:34.382 14:47:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.382 14:47:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:34.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:34.382 14:47:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.382 14:47:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:34.382 14:47:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:34.641 [2024-11-04 14:47:33.509215] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:21:34.641 [2024-11-04 14:47:33.509384] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89165 ] 00:21:34.641 [2024-11-04 14:47:33.687554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.900 [2024-11-04 14:47:33.821093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.158 [2024-11-04 14:47:34.028743] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:35.158 [2024-11-04 14:47:34.028834] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:35.416 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:35.416 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:21:35.416 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:35.416 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:35.416 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:35.416 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:21:35.416 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:35.416 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:35.416 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:35.416 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:35.416 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:21:35.416 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.416 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:35.416 malloc1 00:21:35.416 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.416 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:35.416 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.416 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:35.416 [2024-11-04 14:47:34.514241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:35.416 [2024-11-04 14:47:34.514318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:35.416 [2024-11-04 14:47:34.514361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:35.416 [2024-11-04 14:47:34.514376] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:35.416 
[2024-11-04 14:47:34.517033] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:35.416 [2024-11-04 14:47:34.517092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:35.416 pt1 00:21:35.416 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.416 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:35.416 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:35.416 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:35.416 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:21:35.416 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:35.416 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:35.416 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:35.416 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:35.416 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:21:35.416 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.416 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:35.675 malloc2 00:21:35.675 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.675 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:35.675 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.675 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:35.675 [2024-11-04 14:47:34.565837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:35.675 [2024-11-04 14:47:34.565921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:35.675 [2024-11-04 14:47:34.565969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:35.675 [2024-11-04 14:47:34.565984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:35.675 [2024-11-04 14:47:34.568629] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:35.675 [2024-11-04 14:47:34.568687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:35.675 pt2 00:21:35.675 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.675 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:35.675 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:35.675 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:21:35.675 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.675 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:35.675 [2024-11-04 14:47:34.573881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:35.676 [2024-11-04 14:47:34.576465] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:35.676 [2024-11-04 14:47:34.576762] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:35.676 [2024-11-04 14:47:34.576781] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:35.676 [2024-11-04 14:47:34.576884] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:35.676 [2024-11-04 14:47:34.577008] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:35.676 [2024-11-04 14:47:34.577028] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:35.676 [2024-11-04 14:47:34.577125] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:35.676 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.676 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:35.676 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:35.676 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:35.676 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:35.676 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:35.676 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:35.676 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:35.676 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:35.676 
14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:35.676 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:35.676 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.676 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.676 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.676 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:35.676 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.676 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:35.676 "name": "raid_bdev1", 00:21:35.676 "uuid": "f435668d-5f6d-434e-ae70-19bada60278b", 00:21:35.676 "strip_size_kb": 0, 00:21:35.676 "state": "online", 00:21:35.676 "raid_level": "raid1", 00:21:35.676 "superblock": true, 00:21:35.676 "num_base_bdevs": 2, 00:21:35.676 "num_base_bdevs_discovered": 2, 00:21:35.676 "num_base_bdevs_operational": 2, 00:21:35.676 "base_bdevs_list": [ 00:21:35.676 { 00:21:35.676 "name": "pt1", 00:21:35.676 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:35.676 "is_configured": true, 00:21:35.676 "data_offset": 256, 00:21:35.676 "data_size": 7936 00:21:35.676 }, 00:21:35.676 { 00:21:35.676 "name": "pt2", 00:21:35.676 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:35.676 "is_configured": true, 00:21:35.676 "data_offset": 256, 00:21:35.676 "data_size": 7936 00:21:35.676 } 00:21:35.676 ] 00:21:35.676 }' 00:21:35.676 14:47:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:35.676 14:47:34 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:36.244 [2024-11-04 14:47:35.074492] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:36.244 "name": "raid_bdev1", 00:21:36.244 "aliases": [ 00:21:36.244 "f435668d-5f6d-434e-ae70-19bada60278b" 00:21:36.244 ], 00:21:36.244 "product_name": "Raid Volume", 00:21:36.244 "block_size": 4128, 00:21:36.244 "num_blocks": 7936, 00:21:36.244 "uuid": "f435668d-5f6d-434e-ae70-19bada60278b", 00:21:36.244 "md_size": 32, 
00:21:36.244 "md_interleave": true, 00:21:36.244 "dif_type": 0, 00:21:36.244 "assigned_rate_limits": { 00:21:36.244 "rw_ios_per_sec": 0, 00:21:36.244 "rw_mbytes_per_sec": 0, 00:21:36.244 "r_mbytes_per_sec": 0, 00:21:36.244 "w_mbytes_per_sec": 0 00:21:36.244 }, 00:21:36.244 "claimed": false, 00:21:36.244 "zoned": false, 00:21:36.244 "supported_io_types": { 00:21:36.244 "read": true, 00:21:36.244 "write": true, 00:21:36.244 "unmap": false, 00:21:36.244 "flush": false, 00:21:36.244 "reset": true, 00:21:36.244 "nvme_admin": false, 00:21:36.244 "nvme_io": false, 00:21:36.244 "nvme_io_md": false, 00:21:36.244 "write_zeroes": true, 00:21:36.244 "zcopy": false, 00:21:36.244 "get_zone_info": false, 00:21:36.244 "zone_management": false, 00:21:36.244 "zone_append": false, 00:21:36.244 "compare": false, 00:21:36.244 "compare_and_write": false, 00:21:36.244 "abort": false, 00:21:36.244 "seek_hole": false, 00:21:36.244 "seek_data": false, 00:21:36.244 "copy": false, 00:21:36.244 "nvme_iov_md": false 00:21:36.244 }, 00:21:36.244 "memory_domains": [ 00:21:36.244 { 00:21:36.244 "dma_device_id": "system", 00:21:36.244 "dma_device_type": 1 00:21:36.244 }, 00:21:36.244 { 00:21:36.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:36.244 "dma_device_type": 2 00:21:36.244 }, 00:21:36.244 { 00:21:36.244 "dma_device_id": "system", 00:21:36.244 "dma_device_type": 1 00:21:36.244 }, 00:21:36.244 { 00:21:36.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:36.244 "dma_device_type": 2 00:21:36.244 } 00:21:36.244 ], 00:21:36.244 "driver_specific": { 00:21:36.244 "raid": { 00:21:36.244 "uuid": "f435668d-5f6d-434e-ae70-19bada60278b", 00:21:36.244 "strip_size_kb": 0, 00:21:36.244 "state": "online", 00:21:36.244 "raid_level": "raid1", 00:21:36.244 "superblock": true, 00:21:36.244 "num_base_bdevs": 2, 00:21:36.244 "num_base_bdevs_discovered": 2, 00:21:36.244 "num_base_bdevs_operational": 2, 00:21:36.244 "base_bdevs_list": [ 00:21:36.244 { 00:21:36.244 "name": "pt1", 00:21:36.244 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:21:36.244 "is_configured": true, 00:21:36.244 "data_offset": 256, 00:21:36.244 "data_size": 7936 00:21:36.244 }, 00:21:36.244 { 00:21:36.244 "name": "pt2", 00:21:36.244 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:36.244 "is_configured": true, 00:21:36.244 "data_offset": 256, 00:21:36.244 "data_size": 7936 00:21:36.244 } 00:21:36.244 ] 00:21:36.244 } 00:21:36.244 } 00:21:36.244 }' 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:36.244 pt2' 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:36.244 14:47:35 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:36.244 [2024-11-04 14:47:35.334538] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:36.244 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f435668d-5f6d-434e-ae70-19bada60278b 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z f435668d-5f6d-434e-ae70-19bada60278b ']' 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:36.537 [2024-11-04 14:47:35.386127] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:36.537 [2024-11-04 14:47:35.386157] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:36.537 [2024-11-04 14:47:35.386258] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:36.537 [2024-11-04 14:47:35.386330] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:36.537 [2024-11-04 14:47:35.386350] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.537 14:47:35 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:36.537 14:47:35 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:36.537 [2024-11-04 14:47:35.526243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:36.537 [2024-11-04 14:47:35.528929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:36.537 [2024-11-04 14:47:35.529078] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:21:36.537 [2024-11-04 14:47:35.529155] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:36.537 [2024-11-04 14:47:35.529181] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:36.537 [2024-11-04 14:47:35.529197] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:36.537 request: 00:21:36.537 { 00:21:36.537 "name": "raid_bdev1", 00:21:36.537 "raid_level": "raid1", 00:21:36.537 "base_bdevs": [ 00:21:36.537 "malloc1", 00:21:36.537 "malloc2" 00:21:36.537 ], 00:21:36.537 "superblock": false, 00:21:36.537 "method": "bdev_raid_create", 00:21:36.537 "req_id": 1 00:21:36.537 } 00:21:36.537 Got JSON-RPC error response 00:21:36.537 response: 00:21:36.537 { 00:21:36.537 "code": -17, 00:21:36.537 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:36.537 } 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.537 14:47:35 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.537 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:36.537 [2024-11-04 14:47:35.590196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:36.537 [2024-11-04 14:47:35.590263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:36.537 [2024-11-04 14:47:35.590285] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:36.537 [2024-11-04 14:47:35.590301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:36.538 [2024-11-04 14:47:35.592807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:36.538 [2024-11-04 14:47:35.592853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:36.538 [2024-11-04 14:47:35.592916] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:36.538 [2024-11-04 14:47:35.593009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:36.538 pt1 00:21:36.538 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.538 14:47:35 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:36.538 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:36.538 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:36.538 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:36.538 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:36.538 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:36.538 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:36.538 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:36.538 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:36.538 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:36.538 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.538 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.538 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:36.538 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:36.538 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.538 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:36.538 
"name": "raid_bdev1", 00:21:36.538 "uuid": "f435668d-5f6d-434e-ae70-19bada60278b", 00:21:36.538 "strip_size_kb": 0, 00:21:36.538 "state": "configuring", 00:21:36.538 "raid_level": "raid1", 00:21:36.538 "superblock": true, 00:21:36.538 "num_base_bdevs": 2, 00:21:36.538 "num_base_bdevs_discovered": 1, 00:21:36.538 "num_base_bdevs_operational": 2, 00:21:36.538 "base_bdevs_list": [ 00:21:36.538 { 00:21:36.538 "name": "pt1", 00:21:36.538 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:36.538 "is_configured": true, 00:21:36.538 "data_offset": 256, 00:21:36.538 "data_size": 7936 00:21:36.538 }, 00:21:36.538 { 00:21:36.538 "name": null, 00:21:36.538 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:36.538 "is_configured": false, 00:21:36.538 "data_offset": 256, 00:21:36.538 "data_size": 7936 00:21:36.538 } 00:21:36.538 ] 00:21:36.538 }' 00:21:36.538 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:36.538 14:47:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:37.105 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:21:37.105 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:37.105 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:37.105 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:37.105 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.105 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:37.105 [2024-11-04 14:47:36.118377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:37.105 [2024-11-04 14:47:36.118549] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:37.105 [2024-11-04 14:47:36.118578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:37.105 [2024-11-04 14:47:36.118594] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:37.105 [2024-11-04 14:47:36.118808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:37.105 [2024-11-04 14:47:36.118834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:37.105 [2024-11-04 14:47:36.118897] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:37.105 [2024-11-04 14:47:36.118934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:37.105 [2024-11-04 14:47:36.119066] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:37.105 [2024-11-04 14:47:36.119088] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:37.105 [2024-11-04 14:47:36.119174] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:37.105 [2024-11-04 14:47:36.119271] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:37.105 [2024-11-04 14:47:36.119296] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:37.105 [2024-11-04 14:47:36.119384] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:37.105 pt2 00:21:37.105 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.105 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:37.105 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:37.105 14:47:36 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:37.105 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:37.105 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:37.105 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:37.105 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:37.105 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:37.105 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:37.105 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:37.105 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:37.105 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:37.105 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.105 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.105 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.105 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:37.105 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.105 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:37.105 "name": 
"raid_bdev1", 00:21:37.105 "uuid": "f435668d-5f6d-434e-ae70-19bada60278b", 00:21:37.105 "strip_size_kb": 0, 00:21:37.105 "state": "online", 00:21:37.105 "raid_level": "raid1", 00:21:37.105 "superblock": true, 00:21:37.105 "num_base_bdevs": 2, 00:21:37.105 "num_base_bdevs_discovered": 2, 00:21:37.105 "num_base_bdevs_operational": 2, 00:21:37.105 "base_bdevs_list": [ 00:21:37.105 { 00:21:37.105 "name": "pt1", 00:21:37.105 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:37.105 "is_configured": true, 00:21:37.105 "data_offset": 256, 00:21:37.105 "data_size": 7936 00:21:37.105 }, 00:21:37.105 { 00:21:37.105 "name": "pt2", 00:21:37.105 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:37.105 "is_configured": true, 00:21:37.105 "data_offset": 256, 00:21:37.105 "data_size": 7936 00:21:37.105 } 00:21:37.105 ] 00:21:37.105 }' 00:21:37.105 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:37.105 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:37.672 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:37.672 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:37.672 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:37.672 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:37.672 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:21:37.672 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:37.672 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:37.672 14:47:36 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.672 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:37.672 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:37.672 [2024-11-04 14:47:36.634865] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:37.672 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.672 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:37.672 "name": "raid_bdev1", 00:21:37.672 "aliases": [ 00:21:37.672 "f435668d-5f6d-434e-ae70-19bada60278b" 00:21:37.672 ], 00:21:37.672 "product_name": "Raid Volume", 00:21:37.672 "block_size": 4128, 00:21:37.672 "num_blocks": 7936, 00:21:37.672 "uuid": "f435668d-5f6d-434e-ae70-19bada60278b", 00:21:37.672 "md_size": 32, 00:21:37.672 "md_interleave": true, 00:21:37.672 "dif_type": 0, 00:21:37.672 "assigned_rate_limits": { 00:21:37.672 "rw_ios_per_sec": 0, 00:21:37.672 "rw_mbytes_per_sec": 0, 00:21:37.672 "r_mbytes_per_sec": 0, 00:21:37.672 "w_mbytes_per_sec": 0 00:21:37.672 }, 00:21:37.672 "claimed": false, 00:21:37.672 "zoned": false, 00:21:37.672 "supported_io_types": { 00:21:37.672 "read": true, 00:21:37.672 "write": true, 00:21:37.672 "unmap": false, 00:21:37.672 "flush": false, 00:21:37.672 "reset": true, 00:21:37.672 "nvme_admin": false, 00:21:37.672 "nvme_io": false, 00:21:37.672 "nvme_io_md": false, 00:21:37.672 "write_zeroes": true, 00:21:37.672 "zcopy": false, 00:21:37.672 "get_zone_info": false, 00:21:37.672 "zone_management": false, 00:21:37.672 "zone_append": false, 00:21:37.672 "compare": false, 00:21:37.672 "compare_and_write": false, 00:21:37.672 "abort": false, 00:21:37.672 "seek_hole": false, 00:21:37.672 "seek_data": false, 00:21:37.672 "copy": false, 00:21:37.672 "nvme_iov_md": 
false 00:21:37.672 }, 00:21:37.672 "memory_domains": [ 00:21:37.672 { 00:21:37.672 "dma_device_id": "system", 00:21:37.672 "dma_device_type": 1 00:21:37.672 }, 00:21:37.672 { 00:21:37.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:37.672 "dma_device_type": 2 00:21:37.672 }, 00:21:37.672 { 00:21:37.672 "dma_device_id": "system", 00:21:37.672 "dma_device_type": 1 00:21:37.672 }, 00:21:37.672 { 00:21:37.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:37.672 "dma_device_type": 2 00:21:37.672 } 00:21:37.672 ], 00:21:37.672 "driver_specific": { 00:21:37.672 "raid": { 00:21:37.672 "uuid": "f435668d-5f6d-434e-ae70-19bada60278b", 00:21:37.672 "strip_size_kb": 0, 00:21:37.672 "state": "online", 00:21:37.672 "raid_level": "raid1", 00:21:37.672 "superblock": true, 00:21:37.672 "num_base_bdevs": 2, 00:21:37.672 "num_base_bdevs_discovered": 2, 00:21:37.672 "num_base_bdevs_operational": 2, 00:21:37.672 "base_bdevs_list": [ 00:21:37.672 { 00:21:37.672 "name": "pt1", 00:21:37.672 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:37.672 "is_configured": true, 00:21:37.672 "data_offset": 256, 00:21:37.672 "data_size": 7936 00:21:37.672 }, 00:21:37.672 { 00:21:37.672 "name": "pt2", 00:21:37.673 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:37.673 "is_configured": true, 00:21:37.673 "data_offset": 256, 00:21:37.673 "data_size": 7936 00:21:37.673 } 00:21:37.673 ] 00:21:37.673 } 00:21:37.673 } 00:21:37.673 }' 00:21:37.673 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:37.673 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:37.673 pt2' 00:21:37.673 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:37.673 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:21:37.673 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:37.673 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:37.673 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.673 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:37.673 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:37.932 [2024-11-04 14:47:36.899035] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' f435668d-5f6d-434e-ae70-19bada60278b '!=' f435668d-5f6d-434e-ae70-19bada60278b ']' 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:37.932 [2024-11-04 14:47:36.946731] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:37.932 14:47:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.932 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:21:37.932 "name": "raid_bdev1", 00:21:37.932 "uuid": "f435668d-5f6d-434e-ae70-19bada60278b", 00:21:37.932 "strip_size_kb": 0, 00:21:37.932 "state": "online", 00:21:37.932 "raid_level": "raid1", 00:21:37.932 "superblock": true, 00:21:37.932 "num_base_bdevs": 2, 00:21:37.932 "num_base_bdevs_discovered": 1, 00:21:37.932 "num_base_bdevs_operational": 1, 00:21:37.932 "base_bdevs_list": [ 00:21:37.932 { 00:21:37.932 "name": null, 00:21:37.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.932 "is_configured": false, 00:21:37.932 "data_offset": 0, 00:21:37.932 "data_size": 7936 00:21:37.932 }, 00:21:37.932 { 00:21:37.932 "name": "pt2", 00:21:37.932 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:37.932 "is_configured": true, 00:21:37.932 "data_offset": 256, 00:21:37.932 "data_size": 7936 00:21:37.932 } 00:21:37.932 ] 00:21:37.932 }' 00:21:37.932 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:37.932 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:38.501 [2024-11-04 14:47:37.470901] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:38.501 [2024-11-04 14:47:37.470949] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:38.501 [2024-11-04 14:47:37.471045] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:38.501 [2024-11-04 14:47:37.471109] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:21:38.501 [2024-11-04 14:47:37.471128] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:38.501 [2024-11-04 14:47:37.546875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:38.501 [2024-11-04 14:47:37.546960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:38.501 [2024-11-04 14:47:37.546986] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:38.501 [2024-11-04 14:47:37.547002] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:38.501 [2024-11-04 14:47:37.549610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:38.501 [2024-11-04 14:47:37.549670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:38.501 [2024-11-04 14:47:37.549753] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:38.501 [2024-11-04 14:47:37.549813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:38.501 [2024-11-04 14:47:37.549934] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:38.501 [2024-11-04 14:47:37.549973] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:21:38.501 [2024-11-04 14:47:37.550093] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:38.501 [2024-11-04 14:47:37.550190] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:38.501 [2024-11-04 14:47:37.550214] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:38.501 [2024-11-04 14:47:37.550302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:38.501 pt2 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:38.501 14:47:37 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.501 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:38.501 "name": "raid_bdev1", 00:21:38.501 "uuid": "f435668d-5f6d-434e-ae70-19bada60278b", 00:21:38.501 "strip_size_kb": 0, 00:21:38.501 "state": "online", 00:21:38.501 "raid_level": "raid1", 00:21:38.501 "superblock": true, 00:21:38.501 "num_base_bdevs": 2, 00:21:38.501 "num_base_bdevs_discovered": 1, 00:21:38.501 "num_base_bdevs_operational": 1, 00:21:38.501 "base_bdevs_list": [ 00:21:38.501 { 00:21:38.501 "name": null, 00:21:38.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.501 "is_configured": false, 00:21:38.501 "data_offset": 256, 00:21:38.501 "data_size": 7936 00:21:38.501 }, 00:21:38.501 { 00:21:38.501 "name": "pt2", 00:21:38.501 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:38.501 "is_configured": true, 00:21:38.501 "data_offset": 256, 00:21:38.501 "data_size": 7936 00:21:38.501 } 00:21:38.502 ] 00:21:38.502 }' 00:21:38.502 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:38.502 14:47:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:39.069 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:39.069 14:47:38 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.069 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:39.069 [2024-11-04 14:47:38.079070] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:39.069 [2024-11-04 14:47:38.079106] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:39.069 [2024-11-04 14:47:38.079192] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:39.069 [2024-11-04 14:47:38.079266] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:39.069 [2024-11-04 14:47:38.079287] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:39.069 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.069 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:39.069 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.069 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.069 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:39.069 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.069 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:39.069 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:21:39.069 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:21:39.069 14:47:38 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:39.069 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.069 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:39.069 [2024-11-04 14:47:38.143183] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:39.069 [2024-11-04 14:47:38.143263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:39.069 [2024-11-04 14:47:38.143329] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:39.069 [2024-11-04 14:47:38.143358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:39.069 [2024-11-04 14:47:38.146060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:39.069 [2024-11-04 14:47:38.146110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:39.069 [2024-11-04 14:47:38.146188] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:39.069 [2024-11-04 14:47:38.146247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:39.069 [2024-11-04 14:47:38.146374] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:39.069 [2024-11-04 14:47:38.146391] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:39.070 [2024-11-04 14:47:38.146416] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:21:39.070 [2024-11-04 14:47:38.146485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:39.070 [2024-11-04 14:47:38.146599] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:21:39.070 [2024-11-04 14:47:38.146614] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:39.070 [2024-11-04 14:47:38.146687] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:39.070 [2024-11-04 14:47:38.146806] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:39.070 [2024-11-04 14:47:38.146871] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:39.070 [2024-11-04 14:47:38.146994] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:39.070 pt1 00:21:39.070 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.070 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:21:39.070 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:39.070 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:39.070 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:39.070 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:39.070 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:39.070 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:39.070 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:39.070 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:39.070 14:47:38 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:39.070 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:39.070 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.070 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.070 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.070 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:39.070 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.329 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:39.329 "name": "raid_bdev1", 00:21:39.329 "uuid": "f435668d-5f6d-434e-ae70-19bada60278b", 00:21:39.329 "strip_size_kb": 0, 00:21:39.329 "state": "online", 00:21:39.329 "raid_level": "raid1", 00:21:39.329 "superblock": true, 00:21:39.329 "num_base_bdevs": 2, 00:21:39.329 "num_base_bdevs_discovered": 1, 00:21:39.329 "num_base_bdevs_operational": 1, 00:21:39.329 "base_bdevs_list": [ 00:21:39.329 { 00:21:39.329 "name": null, 00:21:39.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.329 "is_configured": false, 00:21:39.329 "data_offset": 256, 00:21:39.329 "data_size": 7936 00:21:39.329 }, 00:21:39.329 { 00:21:39.329 "name": "pt2", 00:21:39.329 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:39.329 "is_configured": true, 00:21:39.329 "data_offset": 256, 00:21:39.329 "data_size": 7936 00:21:39.329 } 00:21:39.329 ] 00:21:39.329 }' 00:21:39.329 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:39.329 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:21:39.588 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:39.588 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:39.588 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.588 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:39.588 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.848 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:39.848 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:39.848 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.848 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:39.848 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:39.848 [2024-11-04 14:47:38.727673] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:39.848 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.848 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' f435668d-5f6d-434e-ae70-19bada60278b '!=' f435668d-5f6d-434e-ae70-19bada60278b ']' 00:21:39.848 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89165 00:21:39.848 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 89165 ']' 00:21:39.848 14:47:38 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 89165 00:21:39.848 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:21:39.848 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:39.848 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89165 00:21:39.848 killing process with pid 89165 00:21:39.848 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:39.848 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:39.848 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89165' 00:21:39.848 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@971 -- # kill 89165 00:21:39.848 [2024-11-04 14:47:38.808688] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:39.848 14:47:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@976 -- # wait 89165 00:21:39.848 [2024-11-04 14:47:38.808796] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:39.848 [2024-11-04 14:47:38.808859] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:39.848 [2024-11-04 14:47:38.808891] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:40.107 [2024-11-04 14:47:39.005329] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:41.044 14:47:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:21:41.044 00:21:41.044 real 0m6.662s 00:21:41.044 user 0m10.503s 00:21:41.044 sys 0m1.016s 
00:21:41.044 14:47:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:41.044 ************************************ 00:21:41.044 END TEST raid_superblock_test_md_interleaved 00:21:41.044 ************************************ 00:21:41.044 14:47:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:41.044 14:47:40 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:21:41.044 14:47:40 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:21:41.044 14:47:40 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:41.044 14:47:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:41.044 ************************************ 00:21:41.044 START TEST raid_rebuild_test_sb_md_interleaved 00:21:41.044 ************************************ 00:21:41.044 14:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false false 00:21:41.044 14:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:21:41.044 14:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:21:41.044 14:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:41.044 14:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:41.044 14:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:21:41.044 14:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:41.044 14:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:41.044 14:47:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:41.044 14:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:41.044 14:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:41.044 14:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:41.044 14:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:41.044 14:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:41.044 14:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:41.044 14:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:41.044 14:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:41.044 14:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:41.044 14:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:41.044 14:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:41.044 14:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:41.044 14:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:41.044 14:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:21:41.044 14:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:41.044 14:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:41.044 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.044 14:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89489 00:21:41.044 14:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89489 00:21:41.044 14:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:41.044 14:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 89489 ']' 00:21:41.044 14:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.044 14:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:41.044 14:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.044 14:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:41.044 14:47:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:41.303 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:41.303 Zero copy mechanism will not be used. 00:21:41.303 [2024-11-04 14:47:40.220216] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:21:41.303 [2024-11-04 14:47:40.220364] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89489 ] 00:21:41.303 [2024-11-04 14:47:40.396338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.562 [2024-11-04 14:47:40.538114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.839 [2024-11-04 14:47:40.752555] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:41.839 [2024-11-04 14:47:40.752778] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:42.417 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:42.417 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:21:42.417 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:42.417 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:21:42.417 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.417 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:42.417 BaseBdev1_malloc 00:21:42.417 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.417 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:42.417 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.417 14:47:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:42.417 [2024-11-04 14:47:41.288227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:42.417 [2024-11-04 14:47:41.288303] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:42.417 [2024-11-04 14:47:41.288332] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:42.417 [2024-11-04 14:47:41.288350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:42.417 [2024-11-04 14:47:41.291320] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:42.417 [2024-11-04 14:47:41.291371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:42.417 BaseBdev1 00:21:42.417 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.417 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:42.417 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:21:42.417 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.417 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:42.417 BaseBdev2_malloc 00:21:42.417 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.417 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:42.417 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.417 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:21:42.417 [2024-11-04 14:47:41.346224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:42.417 [2024-11-04 14:47:41.346306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:42.417 [2024-11-04 14:47:41.346335] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:42.417 [2024-11-04 14:47:41.346354] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:42.417 [2024-11-04 14:47:41.348971] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:42.417 [2024-11-04 14:47:41.349064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:42.417 BaseBdev2 00:21:42.417 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.417 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:21:42.417 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.417 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:42.417 spare_malloc 00:21:42.417 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.417 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:42.417 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.417 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:42.417 spare_delay 00:21:42.417 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.417 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:42.417 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.417 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:42.417 [2024-11-04 14:47:41.420650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:42.417 [2024-11-04 14:47:41.420724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:42.417 [2024-11-04 14:47:41.420755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:42.417 [2024-11-04 14:47:41.420773] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:42.417 [2024-11-04 14:47:41.423257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:42.417 [2024-11-04 14:47:41.423309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:42.417 spare 00:21:42.417 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.417 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:21:42.417 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.417 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:42.417 [2024-11-04 14:47:41.428724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:42.417 [2024-11-04 14:47:41.431376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:42.417 [2024-11-04 
14:47:41.431607] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:42.417 [2024-11-04 14:47:41.431630] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:42.417 [2024-11-04 14:47:41.431727] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:42.417 [2024-11-04 14:47:41.431842] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:42.417 [2024-11-04 14:47:41.431856] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:42.418 [2024-11-04 14:47:41.432140] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:42.418 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.418 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:42.418 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:42.418 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:42.418 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:42.418 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:42.418 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:42.418 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:42.418 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:42.418 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:21:42.418 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:42.418 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.418 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.418 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.418 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:42.418 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.418 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:42.418 "name": "raid_bdev1", 00:21:42.418 "uuid": "9c039e51-6b25-42cd-8efe-d602a3d097a3", 00:21:42.418 "strip_size_kb": 0, 00:21:42.418 "state": "online", 00:21:42.418 "raid_level": "raid1", 00:21:42.418 "superblock": true, 00:21:42.418 "num_base_bdevs": 2, 00:21:42.418 "num_base_bdevs_discovered": 2, 00:21:42.418 "num_base_bdevs_operational": 2, 00:21:42.418 "base_bdevs_list": [ 00:21:42.418 { 00:21:42.418 "name": "BaseBdev1", 00:21:42.418 "uuid": "591e59c8-7c66-5339-8905-a0f46b71377f", 00:21:42.418 "is_configured": true, 00:21:42.418 "data_offset": 256, 00:21:42.418 "data_size": 7936 00:21:42.418 }, 00:21:42.418 { 00:21:42.418 "name": "BaseBdev2", 00:21:42.418 "uuid": "068100d8-1081-5e81-9c17-6112396eac18", 00:21:42.418 "is_configured": true, 00:21:42.418 "data_offset": 256, 00:21:42.418 "data_size": 7936 00:21:42.418 } 00:21:42.418 ] 00:21:42.418 }' 00:21:42.418 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:42.418 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:42.985 14:47:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:42.985 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:42.985 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.985 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:42.985 [2024-11-04 14:47:41.949304] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:42.985 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.985 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:21:42.985 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.985 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.985 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:42.985 14:47:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:42.985 14:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.985 14:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:21:42.985 14:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:42.985 14:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:21:42.985 14:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:42.985 14:47:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.985 14:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:42.985 [2024-11-04 14:47:42.056946] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:42.985 14:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.985 14:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:42.985 14:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:42.985 14:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:42.985 14:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:42.985 14:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:42.985 14:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:42.985 14:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:42.985 14:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:42.985 14:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:42.985 14:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:42.985 14:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.985 14:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.985 14:47:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.985 14:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:42.985 14:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.243 14:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:43.243 "name": "raid_bdev1", 00:21:43.243 "uuid": "9c039e51-6b25-42cd-8efe-d602a3d097a3", 00:21:43.243 "strip_size_kb": 0, 00:21:43.243 "state": "online", 00:21:43.243 "raid_level": "raid1", 00:21:43.243 "superblock": true, 00:21:43.243 "num_base_bdevs": 2, 00:21:43.243 "num_base_bdevs_discovered": 1, 00:21:43.243 "num_base_bdevs_operational": 1, 00:21:43.243 "base_bdevs_list": [ 00:21:43.243 { 00:21:43.243 "name": null, 00:21:43.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.243 "is_configured": false, 00:21:43.243 "data_offset": 0, 00:21:43.243 "data_size": 7936 00:21:43.243 }, 00:21:43.243 { 00:21:43.243 "name": "BaseBdev2", 00:21:43.243 "uuid": "068100d8-1081-5e81-9c17-6112396eac18", 00:21:43.243 "is_configured": true, 00:21:43.243 "data_offset": 256, 00:21:43.243 "data_size": 7936 00:21:43.243 } 00:21:43.243 ] 00:21:43.243 }' 00:21:43.243 14:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:43.243 14:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:43.502 14:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:43.502 14:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.502 14:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:43.502 [2024-11-04 14:47:42.609125] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:43.760 [2024-11-04 14:47:42.626748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:43.760 14:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.760 14:47:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:43.760 [2024-11-04 14:47:42.629634] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:44.696 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:44.696 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:44.696 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:44.696 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:44.696 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:44.696 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.696 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.696 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.696 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:44.696 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.696 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:44.696 "name": "raid_bdev1", 00:21:44.696 
"uuid": "9c039e51-6b25-42cd-8efe-d602a3d097a3", 00:21:44.696 "strip_size_kb": 0, 00:21:44.696 "state": "online", 00:21:44.696 "raid_level": "raid1", 00:21:44.696 "superblock": true, 00:21:44.696 "num_base_bdevs": 2, 00:21:44.696 "num_base_bdevs_discovered": 2, 00:21:44.696 "num_base_bdevs_operational": 2, 00:21:44.696 "process": { 00:21:44.696 "type": "rebuild", 00:21:44.696 "target": "spare", 00:21:44.696 "progress": { 00:21:44.696 "blocks": 2560, 00:21:44.696 "percent": 32 00:21:44.696 } 00:21:44.696 }, 00:21:44.696 "base_bdevs_list": [ 00:21:44.696 { 00:21:44.696 "name": "spare", 00:21:44.696 "uuid": "a4ac1534-fb20-57ba-980a-09df10fabef1", 00:21:44.696 "is_configured": true, 00:21:44.696 "data_offset": 256, 00:21:44.696 "data_size": 7936 00:21:44.696 }, 00:21:44.696 { 00:21:44.696 "name": "BaseBdev2", 00:21:44.696 "uuid": "068100d8-1081-5e81-9c17-6112396eac18", 00:21:44.696 "is_configured": true, 00:21:44.696 "data_offset": 256, 00:21:44.696 "data_size": 7936 00:21:44.696 } 00:21:44.696 ] 00:21:44.696 }' 00:21:44.696 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:44.696 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:44.696 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:44.696 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:44.696 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:44.696 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.696 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:44.696 [2024-11-04 14:47:43.787558] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:21:44.954 [2024-11-04 14:47:43.838809] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:44.954 [2024-11-04 14:47:43.839109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:44.954 [2024-11-04 14:47:43.839137] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:44.954 [2024-11-04 14:47:43.839170] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:44.954 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.954 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:44.954 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:44.954 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:44.954 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:44.954 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:44.954 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:44.954 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:44.954 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:44.954 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:44.954 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:44.954 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.954 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.954 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.954 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:44.954 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.954 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:44.954 "name": "raid_bdev1", 00:21:44.954 "uuid": "9c039e51-6b25-42cd-8efe-d602a3d097a3", 00:21:44.954 "strip_size_kb": 0, 00:21:44.954 "state": "online", 00:21:44.954 "raid_level": "raid1", 00:21:44.954 "superblock": true, 00:21:44.954 "num_base_bdevs": 2, 00:21:44.954 "num_base_bdevs_discovered": 1, 00:21:44.954 "num_base_bdevs_operational": 1, 00:21:44.954 "base_bdevs_list": [ 00:21:44.954 { 00:21:44.954 "name": null, 00:21:44.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.954 "is_configured": false, 00:21:44.954 "data_offset": 0, 00:21:44.954 "data_size": 7936 00:21:44.954 }, 00:21:44.954 { 00:21:44.954 "name": "BaseBdev2", 00:21:44.954 "uuid": "068100d8-1081-5e81-9c17-6112396eac18", 00:21:44.954 "is_configured": true, 00:21:44.954 "data_offset": 256, 00:21:44.954 "data_size": 7936 00:21:44.954 } 00:21:44.954 ] 00:21:44.954 }' 00:21:44.954 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:44.954 14:47:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:45.521 14:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:45.522 14:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:21:45.522 14:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:45.522 14:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:45.522 14:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:45.522 14:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:45.522 14:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.522 14:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:45.522 14:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.522 14:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.522 14:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:45.522 "name": "raid_bdev1", 00:21:45.522 "uuid": "9c039e51-6b25-42cd-8efe-d602a3d097a3", 00:21:45.522 "strip_size_kb": 0, 00:21:45.522 "state": "online", 00:21:45.522 "raid_level": "raid1", 00:21:45.522 "superblock": true, 00:21:45.522 "num_base_bdevs": 2, 00:21:45.522 "num_base_bdevs_discovered": 1, 00:21:45.522 "num_base_bdevs_operational": 1, 00:21:45.522 "base_bdevs_list": [ 00:21:45.522 { 00:21:45.522 "name": null, 00:21:45.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.522 "is_configured": false, 00:21:45.522 "data_offset": 0, 00:21:45.522 "data_size": 7936 00:21:45.522 }, 00:21:45.522 { 00:21:45.522 "name": "BaseBdev2", 00:21:45.522 "uuid": "068100d8-1081-5e81-9c17-6112396eac18", 00:21:45.522 "is_configured": true, 00:21:45.522 "data_offset": 256, 00:21:45.522 "data_size": 7936 00:21:45.522 } 00:21:45.522 ] 00:21:45.522 }' 
00:21:45.522 14:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:45.522 14:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:45.522 14:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:45.522 14:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:45.522 14:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:45.522 14:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.522 14:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:45.522 [2024-11-04 14:47:44.557628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:45.522 [2024-11-04 14:47:44.573943] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:45.522 14:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.522 14:47:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:45.522 [2024-11-04 14:47:44.576507] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:46.899 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:46.899 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:46.899 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:46.899 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:21:46.899 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:46.899 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.899 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.899 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:46.899 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.899 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.899 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:46.899 "name": "raid_bdev1", 00:21:46.899 "uuid": "9c039e51-6b25-42cd-8efe-d602a3d097a3", 00:21:46.899 "strip_size_kb": 0, 00:21:46.899 "state": "online", 00:21:46.899 "raid_level": "raid1", 00:21:46.899 "superblock": true, 00:21:46.899 "num_base_bdevs": 2, 00:21:46.899 "num_base_bdevs_discovered": 2, 00:21:46.899 "num_base_bdevs_operational": 2, 00:21:46.899 "process": { 00:21:46.899 "type": "rebuild", 00:21:46.899 "target": "spare", 00:21:46.899 "progress": { 00:21:46.899 "blocks": 2560, 00:21:46.899 "percent": 32 00:21:46.899 } 00:21:46.899 }, 00:21:46.899 "base_bdevs_list": [ 00:21:46.899 { 00:21:46.899 "name": "spare", 00:21:46.899 "uuid": "a4ac1534-fb20-57ba-980a-09df10fabef1", 00:21:46.899 "is_configured": true, 00:21:46.899 "data_offset": 256, 00:21:46.899 "data_size": 7936 00:21:46.899 }, 00:21:46.899 { 00:21:46.899 "name": "BaseBdev2", 00:21:46.899 "uuid": "068100d8-1081-5e81-9c17-6112396eac18", 00:21:46.899 "is_configured": true, 00:21:46.899 "data_offset": 256, 00:21:46.899 "data_size": 7936 00:21:46.899 } 00:21:46.899 ] 00:21:46.899 }' 00:21:46.899 14:47:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:46.899 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:46.899 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:46.899 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:46.899 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:46.899 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:46.899 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:46.899 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:21:46.899 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:46.899 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:21:46.899 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=798 00:21:46.900 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:46.900 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:46.900 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:46.900 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:46.900 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:46.900 14:47:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:46.900 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.900 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.900 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:46.900 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.900 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.900 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:46.900 "name": "raid_bdev1", 00:21:46.900 "uuid": "9c039e51-6b25-42cd-8efe-d602a3d097a3", 00:21:46.900 "strip_size_kb": 0, 00:21:46.900 "state": "online", 00:21:46.900 "raid_level": "raid1", 00:21:46.900 "superblock": true, 00:21:46.900 "num_base_bdevs": 2, 00:21:46.900 "num_base_bdevs_discovered": 2, 00:21:46.900 "num_base_bdevs_operational": 2, 00:21:46.900 "process": { 00:21:46.900 "type": "rebuild", 00:21:46.900 "target": "spare", 00:21:46.900 "progress": { 00:21:46.900 "blocks": 2816, 00:21:46.900 "percent": 35 00:21:46.900 } 00:21:46.900 }, 00:21:46.900 "base_bdevs_list": [ 00:21:46.900 { 00:21:46.900 "name": "spare", 00:21:46.900 "uuid": "a4ac1534-fb20-57ba-980a-09df10fabef1", 00:21:46.900 "is_configured": true, 00:21:46.900 "data_offset": 256, 00:21:46.900 "data_size": 7936 00:21:46.900 }, 00:21:46.900 { 00:21:46.900 "name": "BaseBdev2", 00:21:46.900 "uuid": "068100d8-1081-5e81-9c17-6112396eac18", 00:21:46.900 "is_configured": true, 00:21:46.900 "data_offset": 256, 00:21:46.900 "data_size": 7936 00:21:46.900 } 00:21:46.900 ] 00:21:46.900 }' 00:21:46.900 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:46.900 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:46.900 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:46.900 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:46.900 14:47:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:47.837 14:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:47.837 14:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:47.837 14:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:47.837 14:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:47.837 14:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:47.837 14:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:47.837 14:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:47.837 14:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:47.837 14:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.837 14:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:47.837 14:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.837 14:47:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:47.837 "name": "raid_bdev1", 00:21:47.837 "uuid": "9c039e51-6b25-42cd-8efe-d602a3d097a3", 00:21:47.837 "strip_size_kb": 0, 00:21:47.837 "state": "online", 00:21:47.837 "raid_level": "raid1", 00:21:47.837 "superblock": true, 00:21:47.837 "num_base_bdevs": 2, 00:21:47.837 "num_base_bdevs_discovered": 2, 00:21:47.837 "num_base_bdevs_operational": 2, 00:21:47.837 "process": { 00:21:47.837 "type": "rebuild", 00:21:47.837 "target": "spare", 00:21:47.837 "progress": { 00:21:47.837 "blocks": 5888, 00:21:47.837 "percent": 74 00:21:47.837 } 00:21:47.837 }, 00:21:47.837 "base_bdevs_list": [ 00:21:47.837 { 00:21:47.837 "name": "spare", 00:21:47.837 "uuid": "a4ac1534-fb20-57ba-980a-09df10fabef1", 00:21:47.837 "is_configured": true, 00:21:47.837 "data_offset": 256, 00:21:47.837 "data_size": 7936 00:21:47.837 }, 00:21:47.837 { 00:21:47.837 "name": "BaseBdev2", 00:21:47.837 "uuid": "068100d8-1081-5e81-9c17-6112396eac18", 00:21:47.837 "is_configured": true, 00:21:47.837 "data_offset": 256, 00:21:47.837 "data_size": 7936 00:21:47.837 } 00:21:47.837 ] 00:21:47.837 }' 00:21:47.837 14:47:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:48.096 14:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:48.096 14:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:48.096 14:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:48.096 14:47:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:48.664 [2024-11-04 14:47:47.697314] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:48.664 [2024-11-04 14:47:47.697391] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:48.664 [2024-11-04 14:47:47.697534] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:49.232 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:49.232 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:49.232 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:49.232 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:49.232 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:49.232 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:49.232 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.232 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.232 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.232 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.232 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.232 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:49.232 "name": "raid_bdev1", 00:21:49.232 "uuid": "9c039e51-6b25-42cd-8efe-d602a3d097a3", 00:21:49.232 "strip_size_kb": 0, 00:21:49.232 "state": "online", 00:21:49.232 "raid_level": "raid1", 00:21:49.232 "superblock": true, 00:21:49.232 "num_base_bdevs": 2, 00:21:49.232 
"num_base_bdevs_discovered": 2, 00:21:49.232 "num_base_bdevs_operational": 2, 00:21:49.232 "base_bdevs_list": [ 00:21:49.232 { 00:21:49.232 "name": "spare", 00:21:49.232 "uuid": "a4ac1534-fb20-57ba-980a-09df10fabef1", 00:21:49.232 "is_configured": true, 00:21:49.232 "data_offset": 256, 00:21:49.232 "data_size": 7936 00:21:49.232 }, 00:21:49.232 { 00:21:49.233 "name": "BaseBdev2", 00:21:49.233 "uuid": "068100d8-1081-5e81-9c17-6112396eac18", 00:21:49.233 "is_configured": true, 00:21:49.233 "data_offset": 256, 00:21:49.233 "data_size": 7936 00:21:49.233 } 00:21:49.233 ] 00:21:49.233 }' 00:21:49.233 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:49.233 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:49.233 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:49.233 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:49.233 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:21:49.233 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:49.233 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:49.233 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:49.233 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:49.233 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:49.233 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.233 14:47:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.233 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.233 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.233 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.233 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:49.233 "name": "raid_bdev1", 00:21:49.233 "uuid": "9c039e51-6b25-42cd-8efe-d602a3d097a3", 00:21:49.233 "strip_size_kb": 0, 00:21:49.233 "state": "online", 00:21:49.233 "raid_level": "raid1", 00:21:49.233 "superblock": true, 00:21:49.233 "num_base_bdevs": 2, 00:21:49.233 "num_base_bdevs_discovered": 2, 00:21:49.233 "num_base_bdevs_operational": 2, 00:21:49.233 "base_bdevs_list": [ 00:21:49.233 { 00:21:49.233 "name": "spare", 00:21:49.233 "uuid": "a4ac1534-fb20-57ba-980a-09df10fabef1", 00:21:49.233 "is_configured": true, 00:21:49.233 "data_offset": 256, 00:21:49.233 "data_size": 7936 00:21:49.233 }, 00:21:49.233 { 00:21:49.233 "name": "BaseBdev2", 00:21:49.233 "uuid": "068100d8-1081-5e81-9c17-6112396eac18", 00:21:49.233 "is_configured": true, 00:21:49.233 "data_offset": 256, 00:21:49.233 "data_size": 7936 00:21:49.233 } 00:21:49.233 ] 00:21:49.233 }' 00:21:49.233 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:49.233 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:49.233 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:49.493 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:49.493 14:47:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:49.493 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:49.493 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:49.493 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:49.493 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:49.493 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:49.493 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:49.493 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:49.493 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:49.493 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:49.493 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.493 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.493 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.493 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:49.493 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.493 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:49.493 "name": 
"raid_bdev1", 00:21:49.493 "uuid": "9c039e51-6b25-42cd-8efe-d602a3d097a3", 00:21:49.493 "strip_size_kb": 0, 00:21:49.493 "state": "online", 00:21:49.493 "raid_level": "raid1", 00:21:49.493 "superblock": true, 00:21:49.493 "num_base_bdevs": 2, 00:21:49.493 "num_base_bdevs_discovered": 2, 00:21:49.493 "num_base_bdevs_operational": 2, 00:21:49.493 "base_bdevs_list": [ 00:21:49.493 { 00:21:49.493 "name": "spare", 00:21:49.493 "uuid": "a4ac1534-fb20-57ba-980a-09df10fabef1", 00:21:49.493 "is_configured": true, 00:21:49.493 "data_offset": 256, 00:21:49.493 "data_size": 7936 00:21:49.493 }, 00:21:49.493 { 00:21:49.493 "name": "BaseBdev2", 00:21:49.493 "uuid": "068100d8-1081-5e81-9c17-6112396eac18", 00:21:49.493 "is_configured": true, 00:21:49.493 "data_offset": 256, 00:21:49.493 "data_size": 7936 00:21:49.493 } 00:21:49.493 ] 00:21:49.493 }' 00:21:49.493 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:49.493 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.070 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:50.070 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.070 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.070 [2024-11-04 14:47:48.891004] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:50.070 [2024-11-04 14:47:48.891194] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:50.070 [2024-11-04 14:47:48.891349] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:50.070 [2024-11-04 14:47:48.891436] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:50.070 [2024-11-04 
14:47:48.891455] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:50.070 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.070 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.070 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.070 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:21:50.070 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.070 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.070 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:50.070 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:21:50.070 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:50.070 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:50.070 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.070 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.070 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.070 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:50.070 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.070 14:47:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.070 [2024-11-04 14:47:48.950942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:50.070 [2024-11-04 14:47:48.951026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:50.070 [2024-11-04 14:47:48.951059] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:50.070 [2024-11-04 14:47:48.951073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:50.070 [2024-11-04 14:47:48.953794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:50.070 [2024-11-04 14:47:48.953838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:50.070 [2024-11-04 14:47:48.953912] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:50.070 [2024-11-04 14:47:48.953995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:50.070 [2024-11-04 14:47:48.954153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:50.070 spare 00:21:50.070 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.070 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:50.070 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.070 14:47:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.070 [2024-11-04 14:47:49.054287] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:50.070 [2024-11-04 14:47:49.054589] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:50.070 [2024-11-04 14:47:49.054745] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:21:50.070 [2024-11-04 14:47:49.054884] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:50.070 [2024-11-04 14:47:49.054901] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:50.070 [2024-11-04 14:47:49.055070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:50.070 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.070 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:50.070 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:50.070 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:50.070 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:50.070 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:50.070 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:50.070 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:50.070 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:50.070 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:50.070 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:50.070 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.070 14:47:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.070 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.070 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.070 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.070 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:50.070 "name": "raid_bdev1", 00:21:50.070 "uuid": "9c039e51-6b25-42cd-8efe-d602a3d097a3", 00:21:50.070 "strip_size_kb": 0, 00:21:50.070 "state": "online", 00:21:50.070 "raid_level": "raid1", 00:21:50.070 "superblock": true, 00:21:50.070 "num_base_bdevs": 2, 00:21:50.070 "num_base_bdevs_discovered": 2, 00:21:50.070 "num_base_bdevs_operational": 2, 00:21:50.070 "base_bdevs_list": [ 00:21:50.070 { 00:21:50.070 "name": "spare", 00:21:50.070 "uuid": "a4ac1534-fb20-57ba-980a-09df10fabef1", 00:21:50.070 "is_configured": true, 00:21:50.070 "data_offset": 256, 00:21:50.070 "data_size": 7936 00:21:50.070 }, 00:21:50.070 { 00:21:50.070 "name": "BaseBdev2", 00:21:50.070 "uuid": "068100d8-1081-5e81-9c17-6112396eac18", 00:21:50.070 "is_configured": true, 00:21:50.070 "data_offset": 256, 00:21:50.070 "data_size": 7936 00:21:50.070 } 00:21:50.070 ] 00:21:50.070 }' 00:21:50.070 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:50.070 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.639 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:50.639 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:50.639 14:47:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:50.639 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:50.639 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:50.639 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.639 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.639 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.639 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.639 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.639 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:50.639 "name": "raid_bdev1", 00:21:50.639 "uuid": "9c039e51-6b25-42cd-8efe-d602a3d097a3", 00:21:50.639 "strip_size_kb": 0, 00:21:50.639 "state": "online", 00:21:50.639 "raid_level": "raid1", 00:21:50.640 "superblock": true, 00:21:50.640 "num_base_bdevs": 2, 00:21:50.640 "num_base_bdevs_discovered": 2, 00:21:50.640 "num_base_bdevs_operational": 2, 00:21:50.640 "base_bdevs_list": [ 00:21:50.640 { 00:21:50.640 "name": "spare", 00:21:50.640 "uuid": "a4ac1534-fb20-57ba-980a-09df10fabef1", 00:21:50.640 "is_configured": true, 00:21:50.640 "data_offset": 256, 00:21:50.640 "data_size": 7936 00:21:50.640 }, 00:21:50.640 { 00:21:50.640 "name": "BaseBdev2", 00:21:50.640 "uuid": "068100d8-1081-5e81-9c17-6112396eac18", 00:21:50.640 "is_configured": true, 00:21:50.640 "data_offset": 256, 00:21:50.640 "data_size": 7936 00:21:50.640 } 00:21:50.640 ] 00:21:50.640 }' 00:21:50.640 14:47:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:50.640 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:50.640 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:50.640 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:50.640 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.640 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:50.640 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.640 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.640 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.640 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:50.640 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:50.640 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.640 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.900 [2024-11-04 14:47:49.763437] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:50.900 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.900 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:50.900 14:47:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:50.900 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:50.900 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:50.900 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:50.900 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:50.900 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:50.900 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:50.900 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:50.900 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:50.900 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.900 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.900 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.900 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:50.900 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.900 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:50.900 "name": "raid_bdev1", 00:21:50.900 "uuid": "9c039e51-6b25-42cd-8efe-d602a3d097a3", 00:21:50.900 "strip_size_kb": 0, 00:21:50.900 "state": "online", 00:21:50.900 
"raid_level": "raid1", 00:21:50.900 "superblock": true, 00:21:50.900 "num_base_bdevs": 2, 00:21:50.900 "num_base_bdevs_discovered": 1, 00:21:50.900 "num_base_bdevs_operational": 1, 00:21:50.900 "base_bdevs_list": [ 00:21:50.900 { 00:21:50.900 "name": null, 00:21:50.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.900 "is_configured": false, 00:21:50.901 "data_offset": 0, 00:21:50.901 "data_size": 7936 00:21:50.901 }, 00:21:50.901 { 00:21:50.901 "name": "BaseBdev2", 00:21:50.901 "uuid": "068100d8-1081-5e81-9c17-6112396eac18", 00:21:50.901 "is_configured": true, 00:21:50.901 "data_offset": 256, 00:21:50.901 "data_size": 7936 00:21:50.901 } 00:21:50.901 ] 00:21:50.901 }' 00:21:50.901 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:50.901 14:47:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:51.159 14:47:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:51.159 14:47:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:51.159 14:47:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:51.159 [2024-11-04 14:47:50.255576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:51.159 [2024-11-04 14:47:50.255798] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:51.159 [2024-11-04 14:47:50.255823] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:51.159 [2024-11-04 14:47:50.255888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:51.159 [2024-11-04 14:47:50.271571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:21:51.159 14:47:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:51.159 14:47:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:51.159 [2024-11-04 14:47:50.274206] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:52.586 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:52.586 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:52.586 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:52.586 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:52.586 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:52.586 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.586 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.586 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:52.586 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.586 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.586 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:21:52.586 "name": "raid_bdev1", 00:21:52.586 "uuid": "9c039e51-6b25-42cd-8efe-d602a3d097a3", 00:21:52.586 "strip_size_kb": 0, 00:21:52.586 "state": "online", 00:21:52.586 "raid_level": "raid1", 00:21:52.586 "superblock": true, 00:21:52.586 "num_base_bdevs": 2, 00:21:52.586 "num_base_bdevs_discovered": 2, 00:21:52.586 "num_base_bdevs_operational": 2, 00:21:52.586 "process": { 00:21:52.586 "type": "rebuild", 00:21:52.586 "target": "spare", 00:21:52.586 "progress": { 00:21:52.586 "blocks": 2560, 00:21:52.586 "percent": 32 00:21:52.586 } 00:21:52.586 }, 00:21:52.586 "base_bdevs_list": [ 00:21:52.586 { 00:21:52.586 "name": "spare", 00:21:52.586 "uuid": "a4ac1534-fb20-57ba-980a-09df10fabef1", 00:21:52.586 "is_configured": true, 00:21:52.586 "data_offset": 256, 00:21:52.586 "data_size": 7936 00:21:52.586 }, 00:21:52.586 { 00:21:52.586 "name": "BaseBdev2", 00:21:52.586 "uuid": "068100d8-1081-5e81-9c17-6112396eac18", 00:21:52.586 "is_configured": true, 00:21:52.586 "data_offset": 256, 00:21:52.586 "data_size": 7936 00:21:52.586 } 00:21:52.586 ] 00:21:52.586 }' 00:21:52.586 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:52.586 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:52.586 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:52.586 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:52.586 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:52.586 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.586 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:52.586 [2024-11-04 14:47:51.439883] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:52.586 [2024-11-04 14:47:51.482727] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:52.586 [2024-11-04 14:47:51.483019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:52.586 [2024-11-04 14:47:51.483047] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:52.586 [2024-11-04 14:47:51.483063] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:52.586 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.586 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:52.587 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:52.587 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:52.587 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:52.587 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:52.587 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:52.587 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:52.587 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:52.587 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:52.587 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:52.587 14:47:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.587 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.587 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.587 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:52.587 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.587 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:52.587 "name": "raid_bdev1", 00:21:52.587 "uuid": "9c039e51-6b25-42cd-8efe-d602a3d097a3", 00:21:52.587 "strip_size_kb": 0, 00:21:52.587 "state": "online", 00:21:52.587 "raid_level": "raid1", 00:21:52.587 "superblock": true, 00:21:52.587 "num_base_bdevs": 2, 00:21:52.587 "num_base_bdevs_discovered": 1, 00:21:52.587 "num_base_bdevs_operational": 1, 00:21:52.587 "base_bdevs_list": [ 00:21:52.587 { 00:21:52.587 "name": null, 00:21:52.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.587 "is_configured": false, 00:21:52.587 "data_offset": 0, 00:21:52.587 "data_size": 7936 00:21:52.587 }, 00:21:52.587 { 00:21:52.587 "name": "BaseBdev2", 00:21:52.587 "uuid": "068100d8-1081-5e81-9c17-6112396eac18", 00:21:52.587 "is_configured": true, 00:21:52.587 "data_offset": 256, 00:21:52.587 "data_size": 7936 00:21:52.587 } 00:21:52.587 ] 00:21:52.587 }' 00:21:52.587 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:52.587 14:47:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:53.153 14:47:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:53.153 14:47:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.153 14:47:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:53.153 [2024-11-04 14:47:52.061465] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:53.154 [2024-11-04 14:47:52.061553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:53.154 [2024-11-04 14:47:52.061582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:53.154 [2024-11-04 14:47:52.061598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:53.154 [2024-11-04 14:47:52.061877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:53.154 [2024-11-04 14:47:52.061906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:53.154 [2024-11-04 14:47:52.061979] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:53.154 [2024-11-04 14:47:52.062025] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:53.154 [2024-11-04 14:47:52.062040] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:53.154 [2024-11-04 14:47:52.062079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:53.154 [2024-11-04 14:47:52.077269] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:53.154 spare 00:21:53.154 14:47:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.154 14:47:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:53.154 [2024-11-04 14:47:52.079834] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:54.090 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:54.090 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:54.090 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:54.090 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:54.090 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:54.090 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.090 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.090 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.090 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:54.090 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.090 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:21:54.090 "name": "raid_bdev1", 00:21:54.090 "uuid": "9c039e51-6b25-42cd-8efe-d602a3d097a3", 00:21:54.090 "strip_size_kb": 0, 00:21:54.090 "state": "online", 00:21:54.090 "raid_level": "raid1", 00:21:54.090 "superblock": true, 00:21:54.090 "num_base_bdevs": 2, 00:21:54.090 "num_base_bdevs_discovered": 2, 00:21:54.090 "num_base_bdevs_operational": 2, 00:21:54.090 "process": { 00:21:54.090 "type": "rebuild", 00:21:54.090 "target": "spare", 00:21:54.090 "progress": { 00:21:54.090 "blocks": 2560, 00:21:54.090 "percent": 32 00:21:54.090 } 00:21:54.090 }, 00:21:54.090 "base_bdevs_list": [ 00:21:54.090 { 00:21:54.090 "name": "spare", 00:21:54.090 "uuid": "a4ac1534-fb20-57ba-980a-09df10fabef1", 00:21:54.090 "is_configured": true, 00:21:54.090 "data_offset": 256, 00:21:54.090 "data_size": 7936 00:21:54.090 }, 00:21:54.090 { 00:21:54.090 "name": "BaseBdev2", 00:21:54.090 "uuid": "068100d8-1081-5e81-9c17-6112396eac18", 00:21:54.090 "is_configured": true, 00:21:54.090 "data_offset": 256, 00:21:54.090 "data_size": 7936 00:21:54.090 } 00:21:54.090 ] 00:21:54.090 }' 00:21:54.090 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:54.090 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:54.090 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:54.349 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:54.349 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:54.349 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.349 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:54.349 [2024-11-04 
14:47:53.245171] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:54.349 [2024-11-04 14:47:53.288355] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:54.349 [2024-11-04 14:47:53.288457] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:54.349 [2024-11-04 14:47:53.288484] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:54.349 [2024-11-04 14:47:53.288496] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:54.349 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.349 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:54.349 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:54.349 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:54.349 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:54.349 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:54.349 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:54.349 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:54.349 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:54.349 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:54.349 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:54.349 14:47:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.349 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.349 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.349 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:54.349 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.349 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:54.349 "name": "raid_bdev1", 00:21:54.349 "uuid": "9c039e51-6b25-42cd-8efe-d602a3d097a3", 00:21:54.349 "strip_size_kb": 0, 00:21:54.349 "state": "online", 00:21:54.349 "raid_level": "raid1", 00:21:54.349 "superblock": true, 00:21:54.349 "num_base_bdevs": 2, 00:21:54.349 "num_base_bdevs_discovered": 1, 00:21:54.349 "num_base_bdevs_operational": 1, 00:21:54.349 "base_bdevs_list": [ 00:21:54.349 { 00:21:54.349 "name": null, 00:21:54.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.349 "is_configured": false, 00:21:54.349 "data_offset": 0, 00:21:54.349 "data_size": 7936 00:21:54.349 }, 00:21:54.349 { 00:21:54.349 "name": "BaseBdev2", 00:21:54.349 "uuid": "068100d8-1081-5e81-9c17-6112396eac18", 00:21:54.349 "is_configured": true, 00:21:54.349 "data_offset": 256, 00:21:54.349 "data_size": 7936 00:21:54.349 } 00:21:54.349 ] 00:21:54.349 }' 00:21:54.349 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:54.349 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:54.939 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:54.939 14:47:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:54.939 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:54.939 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:54.939 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:54.939 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.939 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.939 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.939 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:54.939 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.939 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:54.939 "name": "raid_bdev1", 00:21:54.940 "uuid": "9c039e51-6b25-42cd-8efe-d602a3d097a3", 00:21:54.940 "strip_size_kb": 0, 00:21:54.940 "state": "online", 00:21:54.940 "raid_level": "raid1", 00:21:54.940 "superblock": true, 00:21:54.940 "num_base_bdevs": 2, 00:21:54.940 "num_base_bdevs_discovered": 1, 00:21:54.940 "num_base_bdevs_operational": 1, 00:21:54.940 "base_bdevs_list": [ 00:21:54.940 { 00:21:54.940 "name": null, 00:21:54.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.940 "is_configured": false, 00:21:54.940 "data_offset": 0, 00:21:54.940 "data_size": 7936 00:21:54.940 }, 00:21:54.940 { 00:21:54.940 "name": "BaseBdev2", 00:21:54.940 "uuid": "068100d8-1081-5e81-9c17-6112396eac18", 00:21:54.940 "is_configured": true, 00:21:54.940 "data_offset": 256, 
00:21:54.940 "data_size": 7936 00:21:54.940 } 00:21:54.940 ] 00:21:54.940 }' 00:21:54.940 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:54.940 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:54.940 14:47:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:54.940 14:47:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:54.940 14:47:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:54.940 14:47:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.940 14:47:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:54.940 14:47:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.940 14:47:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:54.940 14:47:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.940 14:47:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:54.940 [2024-11-04 14:47:54.020903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:54.940 [2024-11-04 14:47:54.021015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:54.940 [2024-11-04 14:47:54.021050] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:54.940 [2024-11-04 14:47:54.021066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:54.940 [2024-11-04 14:47:54.021270] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:54.940 [2024-11-04 14:47:54.021310] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:54.940 [2024-11-04 14:47:54.021380] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:54.940 [2024-11-04 14:47:54.021406] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:54.940 [2024-11-04 14:47:54.021421] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:54.940 [2024-11-04 14:47:54.021437] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:54.940 BaseBdev1 00:21:54.940 14:47:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.940 14:47:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:56.317 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:56.317 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:56.317 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:56.317 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:56.317 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:56.317 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:56.317 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:56.317 14:47:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:56.317 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:56.317 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:56.317 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.317 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.317 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.317 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:56.317 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.317 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:56.317 "name": "raid_bdev1", 00:21:56.317 "uuid": "9c039e51-6b25-42cd-8efe-d602a3d097a3", 00:21:56.317 "strip_size_kb": 0, 00:21:56.317 "state": "online", 00:21:56.317 "raid_level": "raid1", 00:21:56.317 "superblock": true, 00:21:56.317 "num_base_bdevs": 2, 00:21:56.317 "num_base_bdevs_discovered": 1, 00:21:56.317 "num_base_bdevs_operational": 1, 00:21:56.317 "base_bdevs_list": [ 00:21:56.317 { 00:21:56.317 "name": null, 00:21:56.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.317 "is_configured": false, 00:21:56.317 "data_offset": 0, 00:21:56.317 "data_size": 7936 00:21:56.317 }, 00:21:56.317 { 00:21:56.317 "name": "BaseBdev2", 00:21:56.317 "uuid": "068100d8-1081-5e81-9c17-6112396eac18", 00:21:56.317 "is_configured": true, 00:21:56.317 "data_offset": 256, 00:21:56.317 "data_size": 7936 00:21:56.317 } 00:21:56.317 ] 00:21:56.317 }' 00:21:56.317 14:47:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:56.317 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:56.576 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:56.577 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:56.577 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:56.577 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:56.577 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:56.577 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.577 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.577 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.577 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:56.577 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.577 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:56.577 "name": "raid_bdev1", 00:21:56.577 "uuid": "9c039e51-6b25-42cd-8efe-d602a3d097a3", 00:21:56.577 "strip_size_kb": 0, 00:21:56.577 "state": "online", 00:21:56.577 "raid_level": "raid1", 00:21:56.577 "superblock": true, 00:21:56.577 "num_base_bdevs": 2, 00:21:56.577 "num_base_bdevs_discovered": 1, 00:21:56.577 "num_base_bdevs_operational": 1, 00:21:56.577 "base_bdevs_list": [ 00:21:56.577 { 00:21:56.577 "name": 
null, 00:21:56.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.577 "is_configured": false, 00:21:56.577 "data_offset": 0, 00:21:56.577 "data_size": 7936 00:21:56.577 }, 00:21:56.577 { 00:21:56.577 "name": "BaseBdev2", 00:21:56.577 "uuid": "068100d8-1081-5e81-9c17-6112396eac18", 00:21:56.577 "is_configured": true, 00:21:56.577 "data_offset": 256, 00:21:56.577 "data_size": 7936 00:21:56.577 } 00:21:56.577 ] 00:21:56.577 }' 00:21:56.577 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:56.577 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:56.577 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:56.577 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:56.577 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:56.577 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:21:56.577 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:56.577 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:56.577 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:56.577 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:56.577 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:56.577 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:56.577 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.577 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:56.577 [2024-11-04 14:47:55.673552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:56.577 [2024-11-04 14:47:55.673718] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:56.577 [2024-11-04 14:47:55.673743] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:56.577 request: 00:21:56.577 { 00:21:56.577 "base_bdev": "BaseBdev1", 00:21:56.577 "raid_bdev": "raid_bdev1", 00:21:56.577 "method": "bdev_raid_add_base_bdev", 00:21:56.577 "req_id": 1 00:21:56.577 } 00:21:56.577 Got JSON-RPC error response 00:21:56.577 response: 00:21:56.577 { 00:21:56.577 "code": -22, 00:21:56.577 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:56.577 } 00:21:56.577 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:56.577 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:21:56.577 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:56.577 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:56.577 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:56.577 14:47:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:57.950 14:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:21:57.950 14:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:57.950 14:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:57.950 14:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:57.950 14:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:57.950 14:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:57.950 14:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:57.950 14:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:57.950 14:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:57.950 14:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:57.950 14:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.950 14:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.950 14:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.950 14:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:57.950 14:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.950 14:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:57.950 "name": "raid_bdev1", 00:21:57.950 "uuid": "9c039e51-6b25-42cd-8efe-d602a3d097a3", 00:21:57.950 "strip_size_kb": 0, 
00:21:57.950 "state": "online", 00:21:57.950 "raid_level": "raid1", 00:21:57.950 "superblock": true, 00:21:57.950 "num_base_bdevs": 2, 00:21:57.950 "num_base_bdevs_discovered": 1, 00:21:57.950 "num_base_bdevs_operational": 1, 00:21:57.950 "base_bdevs_list": [ 00:21:57.950 { 00:21:57.950 "name": null, 00:21:57.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.950 "is_configured": false, 00:21:57.950 "data_offset": 0, 00:21:57.950 "data_size": 7936 00:21:57.950 }, 00:21:57.950 { 00:21:57.950 "name": "BaseBdev2", 00:21:57.950 "uuid": "068100d8-1081-5e81-9c17-6112396eac18", 00:21:57.950 "is_configured": true, 00:21:57.950 "data_offset": 256, 00:21:57.950 "data_size": 7936 00:21:57.950 } 00:21:57.950 ] 00:21:57.950 }' 00:21:57.950 14:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:57.950 14:47:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:58.209 14:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:58.209 14:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:58.209 14:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:58.209 14:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:58.209 14:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:58.209 14:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:58.209 14:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.209 14:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:58.209 14:47:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:58.209 14:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.209 14:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:58.209 "name": "raid_bdev1", 00:21:58.209 "uuid": "9c039e51-6b25-42cd-8efe-d602a3d097a3", 00:21:58.209 "strip_size_kb": 0, 00:21:58.209 "state": "online", 00:21:58.209 "raid_level": "raid1", 00:21:58.209 "superblock": true, 00:21:58.209 "num_base_bdevs": 2, 00:21:58.209 "num_base_bdevs_discovered": 1, 00:21:58.209 "num_base_bdevs_operational": 1, 00:21:58.209 "base_bdevs_list": [ 00:21:58.209 { 00:21:58.209 "name": null, 00:21:58.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.209 "is_configured": false, 00:21:58.209 "data_offset": 0, 00:21:58.209 "data_size": 7936 00:21:58.209 }, 00:21:58.209 { 00:21:58.209 "name": "BaseBdev2", 00:21:58.209 "uuid": "068100d8-1081-5e81-9c17-6112396eac18", 00:21:58.209 "is_configured": true, 00:21:58.209 "data_offset": 256, 00:21:58.209 "data_size": 7936 00:21:58.209 } 00:21:58.209 ] 00:21:58.209 }' 00:21:58.209 14:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:58.491 14:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:58.491 14:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:58.491 14:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:58.491 14:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89489 00:21:58.491 14:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 89489 ']' 00:21:58.491 14:47:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 89489 00:21:58.491 14:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:21:58.491 14:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:58.491 14:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89489 00:21:58.491 killing process with pid 89489 00:21:58.491 Received shutdown signal, test time was about 60.000000 seconds 00:21:58.491 00:21:58.491 Latency(us) 00:21:58.491 [2024-11-04T14:47:57.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:58.491 [2024-11-04T14:47:57.614Z] =================================================================================================================== 00:21:58.491 [2024-11-04T14:47:57.614Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:58.491 14:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:58.491 14:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:58.491 14:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89489' 00:21:58.491 14:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 89489 00:21:58.491 [2024-11-04 14:47:57.426289] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:58.491 [2024-11-04 14:47:57.426470] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:58.491 14:47:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 89489 00:21:58.491 [2024-11-04 14:47:57.426530] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:21:58.491 [2024-11-04 14:47:57.426548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:58.750 [2024-11-04 14:47:57.672887] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:59.685 14:47:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:21:59.686 00:21:59.686 real 0m18.534s 00:21:59.686 user 0m25.361s 00:21:59.686 sys 0m1.408s 00:21:59.686 14:47:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:59.686 ************************************ 00:21:59.686 END TEST raid_rebuild_test_sb_md_interleaved 00:21:59.686 ************************************ 00:21:59.686 14:47:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:59.686 14:47:58 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:21:59.686 14:47:58 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:21:59.686 14:47:58 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89489 ']' 00:21:59.686 14:47:58 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89489 00:21:59.686 14:47:58 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:21:59.686 00:21:59.686 real 13m1.315s 00:21:59.686 user 18m27.179s 00:21:59.686 sys 1m44.508s 00:21:59.686 14:47:58 bdev_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:59.686 ************************************ 00:21:59.686 END TEST bdev_raid 00:21:59.686 ************************************ 00:21:59.686 14:47:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:59.686 14:47:58 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:21:59.686 14:47:58 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:21:59.686 14:47:58 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:59.686 14:47:58 -- common/autotest_common.sh@10 -- # set +x 00:21:59.686 
************************************ 00:21:59.686 START TEST spdkcli_raid 00:21:59.686 ************************************ 00:21:59.686 14:47:58 spdkcli_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:21:59.945 * Looking for test storage... 00:21:59.945 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:59.945 14:47:58 spdkcli_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:59.945 14:47:58 spdkcli_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:21:59.945 14:47:58 spdkcli_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:59.945 14:47:58 spdkcli_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:59.945 14:47:58 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:59.945 14:47:58 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:59.945 14:47:58 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:59.945 14:47:58 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:21:59.945 14:47:58 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:21:59.945 14:47:58 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:21:59.945 14:47:58 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:21:59.945 14:47:58 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:21:59.945 14:47:58 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:21:59.945 14:47:58 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:21:59.945 14:47:58 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:59.945 14:47:58 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:21:59.945 14:47:58 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:21:59.945 14:47:58 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:59.945 14:47:58 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:59.945 14:47:58 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:21:59.945 14:47:58 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:21:59.945 14:47:58 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:59.945 14:47:58 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:21:59.945 14:47:58 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:59.945 14:47:58 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:21:59.945 14:47:58 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:21:59.945 14:47:58 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:59.946 14:47:58 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:21:59.946 14:47:58 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:59.946 14:47:58 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:59.946 14:47:58 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:59.946 14:47:58 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:21:59.946 14:47:58 spdkcli_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:59.946 14:47:58 spdkcli_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:59.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.946 --rc genhtml_branch_coverage=1 00:21:59.946 --rc genhtml_function_coverage=1 00:21:59.946 --rc genhtml_legend=1 00:21:59.946 --rc geninfo_all_blocks=1 00:21:59.946 --rc geninfo_unexecuted_blocks=1 00:21:59.946 00:21:59.946 ' 00:21:59.946 14:47:58 spdkcli_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:59.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.946 --rc genhtml_branch_coverage=1 00:21:59.946 --rc genhtml_function_coverage=1 00:21:59.946 --rc genhtml_legend=1 00:21:59.946 --rc geninfo_all_blocks=1 00:21:59.946 --rc geninfo_unexecuted_blocks=1 00:21:59.946 00:21:59.946 ' 00:21:59.946 
14:47:58 spdkcli_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:59.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.946 --rc genhtml_branch_coverage=1 00:21:59.946 --rc genhtml_function_coverage=1 00:21:59.946 --rc genhtml_legend=1 00:21:59.946 --rc geninfo_all_blocks=1 00:21:59.946 --rc geninfo_unexecuted_blocks=1 00:21:59.946 00:21:59.946 ' 00:21:59.946 14:47:58 spdkcli_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:59.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.946 --rc genhtml_branch_coverage=1 00:21:59.946 --rc genhtml_function_coverage=1 00:21:59.946 --rc genhtml_legend=1 00:21:59.946 --rc geninfo_all_blocks=1 00:21:59.946 --rc geninfo_unexecuted_blocks=1 00:21:59.946 00:21:59.946 ' 00:21:59.946 14:47:58 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:21:59.946 14:47:58 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:21:59.946 14:47:58 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:21:59.946 14:47:58 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:21:59.946 14:47:58 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:21:59.946 14:47:58 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:21:59.946 14:47:58 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:21:59.946 14:47:58 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:21:59.946 14:47:58 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:21:59.946 14:47:58 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:21:59.946 14:47:58 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:21:59.946 14:47:58 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:21:59.946 14:47:58 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:21:59.946 14:47:58 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:21:59.946 14:47:58 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:21:59.946 14:47:58 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:21:59.946 14:47:58 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:21:59.946 14:47:58 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:21:59.946 14:47:58 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:21:59.946 14:47:58 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:21:59.946 14:47:58 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:21:59.946 14:47:58 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:21:59.946 14:47:58 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:21:59.946 14:47:58 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:21:59.946 14:47:58 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:21:59.946 14:47:58 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:21:59.946 14:47:58 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:59.946 14:47:58 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:59.946 14:47:58 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:21:59.946 14:47:58 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:21:59.946 14:47:58 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:21:59.946 14:47:58 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:21:59.946 14:47:58 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:21:59.946 14:47:58 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:59.946 14:47:58 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:59.946 14:47:58 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:21:59.946 14:47:58 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90174 00:21:59.946 14:47:58 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90174 00:21:59.946 14:47:58 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:21:59.946 14:47:58 spdkcli_raid -- common/autotest_common.sh@833 -- # '[' -z 90174 ']' 00:21:59.946 14:47:58 spdkcli_raid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.946 14:47:58 spdkcli_raid -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:59.946 14:47:58 spdkcli_raid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.946 14:47:58 spdkcli_raid -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:59.946 14:47:58 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:00.205 [2024-11-04 14:47:59.117624] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:22:00.205 [2024-11-04 14:47:59.117806] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90174 ] 00:22:00.205 [2024-11-04 14:47:59.311732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:00.463 [2024-11-04 14:47:59.470222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.463 [2024-11-04 14:47:59.470240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.399 14:48:00 spdkcli_raid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:01.399 14:48:00 spdkcli_raid -- common/autotest_common.sh@866 -- # return 0 00:22:01.399 14:48:00 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:22:01.399 14:48:00 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:01.399 14:48:00 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:01.399 14:48:00 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:22:01.399 14:48:00 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:01.399 14:48:00 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:01.399 14:48:00 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:22:01.399 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:22:01.399 ' 00:22:03.300 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:22:03.300 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:22:03.300 14:48:02 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:22:03.300 14:48:02 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:03.300 14:48:02 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:22:03.300 14:48:02 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:22:03.300 14:48:02 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:03.300 14:48:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:03.300 14:48:02 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:22:03.300 ' 00:22:04.235 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:22:04.495 14:48:03 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:22:04.495 14:48:03 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:04.495 14:48:03 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:04.495 14:48:03 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:22:04.495 14:48:03 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:04.495 14:48:03 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:04.495 14:48:03 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:22:04.495 14:48:03 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:22:05.074 14:48:03 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:22:05.074 14:48:04 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:22:05.074 14:48:04 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:22:05.074 14:48:04 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:05.074 14:48:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:05.074 14:48:04 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:22:05.074 14:48:04 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:05.074 14:48:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:05.074 14:48:04 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:22:05.074 ' 00:22:06.450 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:22:06.450 14:48:05 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:22:06.450 14:48:05 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:06.450 14:48:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:06.450 14:48:05 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:22:06.450 14:48:05 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:06.450 14:48:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:06.450 14:48:05 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:22:06.450 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:22:06.450 ' 00:22:07.831 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:22:07.831 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:22:07.831 14:48:06 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:22:07.831 14:48:06 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:07.831 14:48:06 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:07.831 14:48:06 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90174 00:22:07.832 14:48:06 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 90174 ']' 00:22:07.832 14:48:06 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 90174 00:22:07.832 14:48:06 spdkcli_raid -- 
common/autotest_common.sh@957 -- # uname 00:22:07.832 14:48:06 spdkcli_raid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:07.832 14:48:06 spdkcli_raid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90174 00:22:07.832 14:48:06 spdkcli_raid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:07.832 14:48:06 spdkcli_raid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:07.832 killing process with pid 90174 00:22:07.832 14:48:06 spdkcli_raid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90174' 00:22:07.832 14:48:06 spdkcli_raid -- common/autotest_common.sh@971 -- # kill 90174 00:22:07.832 14:48:06 spdkcli_raid -- common/autotest_common.sh@976 -- # wait 90174 00:22:10.377 14:48:09 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:22:10.377 14:48:09 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90174 ']' 00:22:10.377 14:48:09 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90174 00:22:10.377 14:48:09 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 90174 ']' 00:22:10.377 14:48:09 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 90174 00:22:10.377 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (90174) - No such process 00:22:10.377 Process with pid 90174 is not found 00:22:10.377 14:48:09 spdkcli_raid -- common/autotest_common.sh@979 -- # echo 'Process with pid 90174 is not found' 00:22:10.377 14:48:09 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:22:10.377 14:48:09 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:22:10.377 14:48:09 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:22:10.377 14:48:09 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:22:10.377 00:22:10.377 real 0m10.334s 00:22:10.377 user 0m21.341s 00:22:10.378 sys 
0m1.197s 00:22:10.378 14:48:09 spdkcli_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:10.378 ************************************ 00:22:10.378 END TEST spdkcli_raid 00:22:10.378 ************************************ 00:22:10.378 14:48:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:22:10.378 14:48:09 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:22:10.378 14:48:09 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:10.378 14:48:09 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:10.378 14:48:09 -- common/autotest_common.sh@10 -- # set +x 00:22:10.378 ************************************ 00:22:10.378 START TEST blockdev_raid5f 00:22:10.378 ************************************ 00:22:10.378 14:48:09 blockdev_raid5f -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:22:10.378 * Looking for test storage... 00:22:10.378 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:22:10.378 14:48:09 blockdev_raid5f -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:10.378 14:48:09 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lcov --version 00:22:10.378 14:48:09 blockdev_raid5f -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:10.378 14:48:09 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:10.378 14:48:09 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:10.378 14:48:09 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:10.378 14:48:09 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:10.378 14:48:09 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:22:10.378 14:48:09 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:22:10.378 14:48:09 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:22:10.378 14:48:09 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:22:10.378 14:48:09 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:22:10.378 14:48:09 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:22:10.378 14:48:09 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:22:10.378 14:48:09 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:10.378 14:48:09 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:22:10.378 14:48:09 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:22:10.378 14:48:09 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:10.378 14:48:09 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:10.378 14:48:09 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:22:10.378 14:48:09 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:22:10.378 14:48:09 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:10.378 14:48:09 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:22:10.378 14:48:09 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:22:10.378 14:48:09 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:22:10.378 14:48:09 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:22:10.378 14:48:09 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:10.378 14:48:09 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:22:10.378 14:48:09 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:22:10.378 14:48:09 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:10.378 14:48:09 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:10.378 14:48:09 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:22:10.378 14:48:09 blockdev_raid5f -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:10.378 14:48:09 blockdev_raid5f -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:10.378 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.378 --rc genhtml_branch_coverage=1 00:22:10.378 --rc genhtml_function_coverage=1 00:22:10.378 --rc genhtml_legend=1 00:22:10.378 --rc geninfo_all_blocks=1 00:22:10.378 --rc geninfo_unexecuted_blocks=1 00:22:10.378 00:22:10.378 ' 00:22:10.378 14:48:09 blockdev_raid5f -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:10.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.378 --rc genhtml_branch_coverage=1 00:22:10.378 --rc genhtml_function_coverage=1 00:22:10.378 --rc genhtml_legend=1 00:22:10.378 --rc geninfo_all_blocks=1 00:22:10.378 --rc geninfo_unexecuted_blocks=1 00:22:10.378 00:22:10.378 ' 00:22:10.378 14:48:09 blockdev_raid5f -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:10.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.378 --rc genhtml_branch_coverage=1 00:22:10.378 --rc genhtml_function_coverage=1 00:22:10.378 --rc genhtml_legend=1 00:22:10.378 --rc geninfo_all_blocks=1 00:22:10.378 --rc geninfo_unexecuted_blocks=1 00:22:10.378 00:22:10.378 ' 00:22:10.378 14:48:09 blockdev_raid5f -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:10.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:10.378 --rc genhtml_branch_coverage=1 00:22:10.378 --rc genhtml_function_coverage=1 00:22:10.378 --rc genhtml_legend=1 00:22:10.378 --rc geninfo_all_blocks=1 00:22:10.378 --rc geninfo_unexecuted_blocks=1 00:22:10.378 00:22:10.378 ' 00:22:10.378 14:48:09 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:22:10.378 14:48:09 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:22:10.378 14:48:09 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:22:10.378 14:48:09 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:10.378 14:48:09 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:22:10.378 14:48:09 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:22:10.378 14:48:09 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:22:10.378 14:48:09 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:22:10.378 14:48:09 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:22:10.378 14:48:09 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:22:10.378 14:48:09 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:22:10.378 14:48:09 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:22:10.378 14:48:09 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:22:10.378 14:48:09 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:22:10.378 14:48:09 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:22:10.378 14:48:09 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:22:10.378 14:48:09 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:22:10.378 14:48:09 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:22:10.378 14:48:09 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:22:10.378 14:48:09 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:22:10.378 14:48:09 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:22:10.378 14:48:09 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:22:10.378 14:48:09 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:22:10.378 14:48:09 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:22:10.378 14:48:09 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90453 00:22:10.378 14:48:09 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:22:10.378 14:48:09 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 
90453 00:22:10.378 14:48:09 blockdev_raid5f -- common/autotest_common.sh@833 -- # '[' -z 90453 ']' 00:22:10.378 14:48:09 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:22:10.378 14:48:09 blockdev_raid5f -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:10.378 14:48:09 blockdev_raid5f -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:10.378 14:48:09 blockdev_raid5f -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:10.378 14:48:09 blockdev_raid5f -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:10.378 14:48:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:10.378 [2024-11-04 14:48:09.456490] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:22:10.378 [2024-11-04 14:48:09.457270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90453 ] 00:22:10.638 [2024-11-04 14:48:09.636808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.896 [2024-11-04 14:48:09.772979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.863 14:48:10 blockdev_raid5f -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:11.863 14:48:10 blockdev_raid5f -- common/autotest_common.sh@866 -- # return 0 00:22:11.863 14:48:10 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:22:11.863 14:48:10 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:22:11.863 14:48:10 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:22:11.863 14:48:10 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.863 14:48:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:11.863 Malloc0 00:22:11.863 Malloc1 00:22:11.863 Malloc2 00:22:11.863 14:48:10 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.863 14:48:10 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:22:11.863 14:48:10 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.863 14:48:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:11.863 14:48:10 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.863 14:48:10 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:22:11.863 14:48:10 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:22:11.863 14:48:10 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.863 14:48:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:11.863 14:48:10 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.863 14:48:10 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:22:11.863 14:48:10 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.863 14:48:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:11.863 14:48:10 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.863 14:48:10 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:22:11.863 14:48:10 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.863 14:48:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:11.863 14:48:10 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.863 14:48:10 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:22:11.864 14:48:10 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:22:11.864 14:48:10 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.864 14:48:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:11.864 14:48:10 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:22:11.864 14:48:10 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.864 14:48:10 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:22:11.864 14:48:10 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:22:11.864 14:48:10 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "5b55c121-d4f7-4448-97be-d329a2dc19f1"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "5b55c121-d4f7-4448-97be-d329a2dc19f1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "5b55c121-d4f7-4448-97be-d329a2dc19f1",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "1ab5da83-86aa-4c24-a41e-6f19ed2354ea",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"ba88c7d3-778d-47d4-94ea-20c738a7cbdc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "815ebb3b-e7a1-474a-acde-354078229050",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:22:11.864 14:48:10 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:22:11.864 14:48:10 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:22:11.864 14:48:10 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:22:11.864 14:48:10 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90453 00:22:11.864 14:48:10 blockdev_raid5f -- common/autotest_common.sh@952 -- # '[' -z 90453 ']' 00:22:11.864 14:48:10 blockdev_raid5f -- common/autotest_common.sh@956 -- # kill -0 90453 00:22:11.864 14:48:10 blockdev_raid5f -- common/autotest_common.sh@957 -- # uname 00:22:11.864 14:48:10 blockdev_raid5f -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:11.864 14:48:10 blockdev_raid5f -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90453 00:22:11.864 14:48:10 blockdev_raid5f -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:11.864 14:48:10 blockdev_raid5f -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:11.864 killing process with pid 90453 00:22:11.864 14:48:10 blockdev_raid5f -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90453' 00:22:11.864 14:48:10 blockdev_raid5f -- common/autotest_common.sh@971 -- # kill 90453 00:22:11.864 14:48:10 blockdev_raid5f -- common/autotest_common.sh@976 -- # wait 90453 00:22:14.395 14:48:13 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:14.395 14:48:13 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:22:14.395 14:48:13 
blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:22:14.395 14:48:13 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:14.395 14:48:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:14.395 ************************************ 00:22:14.395 START TEST bdev_hello_world 00:22:14.395 ************************************ 00:22:14.395 14:48:13 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:22:14.395 [2024-11-04 14:48:13.453482] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:22:14.395 [2024-11-04 14:48:13.453651] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90515 ] 00:22:14.654 [2024-11-04 14:48:13.626778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.654 [2024-11-04 14:48:13.748468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.220 [2024-11-04 14:48:14.270312] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:22:15.220 [2024-11-04 14:48:14.270380] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:22:15.220 [2024-11-04 14:48:14.270404] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:22:15.220 [2024-11-04 14:48:14.270963] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:22:15.220 [2024-11-04 14:48:14.271180] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:22:15.220 [2024-11-04 14:48:14.271226] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:22:15.220 [2024-11-04 14:48:14.271297] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:22:15.220 00:22:15.220 [2024-11-04 14:48:14.271326] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:22:16.608 00:22:16.608 real 0m2.227s 00:22:16.608 user 0m1.799s 00:22:16.608 sys 0m0.297s 00:22:16.608 14:48:15 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:16.608 14:48:15 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:22:16.608 ************************************ 00:22:16.608 END TEST bdev_hello_world 00:22:16.608 ************************************ 00:22:16.608 14:48:15 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:22:16.608 14:48:15 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:16.608 14:48:15 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:16.608 14:48:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:16.608 ************************************ 00:22:16.608 START TEST bdev_bounds 00:22:16.608 ************************************ 00:22:16.608 14:48:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:22:16.608 14:48:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90557 00:22:16.608 14:48:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:22:16.608 14:48:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90557' 00:22:16.608 Process bdevio pid: 90557 00:22:16.608 14:48:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90557 00:22:16.608 14:48:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:22:16.608 14:48:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 90557 ']' 00:22:16.608 14:48:15 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.608 14:48:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:16.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.608 14:48:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.608 14:48:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:16.608 14:48:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:22:16.882 [2024-11-04 14:48:15.720331] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:22:16.882 [2024-11-04 14:48:15.720504] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90557 ] 00:22:16.882 [2024-11-04 14:48:15.900350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:17.139 [2024-11-04 14:48:16.036608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.139 [2024-11-04 14:48:16.036746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.139 [2024-11-04 14:48:16.036758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:17.705 14:48:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:17.705 14:48:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:22:17.705 14:48:16 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:22:17.705 I/O targets: 00:22:17.705 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:22:17.705 00:22:17.705 
00:22:17.705 CUnit - A unit testing framework for C - Version 2.1-3 00:22:17.705 http://cunit.sourceforge.net/ 00:22:17.705 00:22:17.705 00:22:17.705 Suite: bdevio tests on: raid5f 00:22:17.705 Test: blockdev write read block ...passed 00:22:17.705 Test: blockdev write zeroes read block ...passed 00:22:17.963 Test: blockdev write zeroes read no split ...passed 00:22:17.963 Test: blockdev write zeroes read split ...passed 00:22:17.963 Test: blockdev write zeroes read split partial ...passed 00:22:17.963 Test: blockdev reset ...passed 00:22:17.963 Test: blockdev write read 8 blocks ...passed 00:22:17.963 Test: blockdev write read size > 128k ...passed 00:22:17.963 Test: blockdev write read invalid size ...passed 00:22:17.963 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:17.963 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:17.963 Test: blockdev write read max offset ...passed 00:22:17.963 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:17.963 Test: blockdev writev readv 8 blocks ...passed 00:22:17.963 Test: blockdev writev readv 30 x 1block ...passed 00:22:17.963 Test: blockdev writev readv block ...passed 00:22:17.963 Test: blockdev writev readv size > 128k ...passed 00:22:17.963 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:17.963 Test: blockdev comparev and writev ...passed 00:22:17.963 Test: blockdev nvme passthru rw ...passed 00:22:17.963 Test: blockdev nvme passthru vendor specific ...passed 00:22:17.963 Test: blockdev nvme admin passthru ...passed 00:22:17.963 Test: blockdev copy ...passed 00:22:17.963 00:22:17.963 Run Summary: Type Total Ran Passed Failed Inactive 00:22:17.963 suites 1 1 n/a 0 0 00:22:17.963 tests 23 23 23 0 0 00:22:17.963 asserts 130 130 130 0 n/a 00:22:17.963 00:22:17.963 Elapsed time = 0.551 seconds 00:22:17.963 0 00:22:17.963 14:48:17 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90557 00:22:17.963 
14:48:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 90557 ']' 00:22:17.963 14:48:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 90557 00:22:17.963 14:48:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:22:17.963 14:48:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:17.963 14:48:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90557 00:22:18.221 14:48:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:18.221 14:48:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:18.221 killing process with pid 90557 00:22:18.221 14:48:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90557' 00:22:18.221 14:48:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@971 -- # kill 90557 00:22:18.221 14:48:17 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@976 -- # wait 90557 00:22:19.594 14:48:18 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:22:19.594 00:22:19.594 real 0m2.786s 00:22:19.594 user 0m6.951s 00:22:19.594 sys 0m0.422s 00:22:19.594 14:48:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:19.594 ************************************ 00:22:19.594 END TEST bdev_bounds 00:22:19.594 14:48:18 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:22:19.594 ************************************ 00:22:19.594 14:48:18 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:22:19.594 14:48:18 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:22:19.594 14:48:18 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:19.594 
14:48:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:19.594 ************************************ 00:22:19.594 START TEST bdev_nbd 00:22:19.594 ************************************ 00:22:19.594 14:48:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:22:19.594 14:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:22:19.594 14:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:22:19.594 14:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:19.594 14:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:19.594 14:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:22:19.594 14:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:22:19.594 14:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:22:19.594 14:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:22:19.594 14:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:22:19.594 14:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:22:19.594 14:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:22:19.594 14:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:22:19.594 14:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:22:19.594 14:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:22:19.594 14:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:22:19.594 14:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90617 00:22:19.594 14:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:22:19.594 14:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90617 /var/tmp/spdk-nbd.sock 00:22:19.594 14:48:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:22:19.594 14:48:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 90617 ']' 00:22:19.594 14:48:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:22:19.595 14:48:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:19.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:22:19.595 14:48:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:22:19.595 14:48:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:19.595 14:48:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:22:19.595 [2024-11-04 14:48:18.565193] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:22:19.595 [2024-11-04 14:48:18.565873] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.852 [2024-11-04 14:48:18.763578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.852 [2024-11-04 14:48:18.892712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.419 14:48:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:20.419 14:48:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:22:20.419 14:48:19 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:22:20.419 14:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:20.419 14:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:22:20.419 14:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:22:20.419 14:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:22:20.419 14:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:20.419 14:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:22:20.419 14:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:22:20.419 14:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:22:20.419 14:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:22:20.419 14:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:22:20.419 14:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:22:20.419 14:48:19 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:22:20.675 14:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:22:20.933 14:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:22:20.933 14:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:22:20.933 14:48:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:22:20.933 14:48:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:22:20.933 14:48:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:20.933 14:48:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:20.933 14:48:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:22:20.933 14:48:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:22:20.933 14:48:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:20.933 14:48:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:20.933 14:48:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:20.933 1+0 records in 00:22:20.933 1+0 records out 00:22:20.933 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346891 s, 11.8 MB/s 00:22:20.933 14:48:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:20.933 14:48:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:22:20.933 14:48:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:20.933 14:48:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 
00:22:20.933 14:48:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:22:20.933 14:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:22:20.933 14:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:22:20.933 14:48:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:21.191 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:22:21.191 { 00:22:21.191 "nbd_device": "/dev/nbd0", 00:22:21.191 "bdev_name": "raid5f" 00:22:21.191 } 00:22:21.191 ]' 00:22:21.191 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:22:21.191 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:22:21.191 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:22:21.191 { 00:22:21.191 "nbd_device": "/dev/nbd0", 00:22:21.191 "bdev_name": "raid5f" 00:22:21.191 } 00:22:21.191 ]' 00:22:21.191 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:21.191 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:21.191 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:21.191 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:21.191 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:21.191 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:21.191 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:21.450 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:22:21.450 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:21.450 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:21.450 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:21.450 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:21.450 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:21.450 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:21.450 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:21.450 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:21.450 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:21.450 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:21.709 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:22:21.709 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:22:21.709 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:21.709 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:22:21.709 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:21.709 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:22:21.709 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:22:21.709 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:22:21.709 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:22:21.709 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:22:21.709 14:48:20 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:22:21.709 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:22:21.709 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:22:21.709 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:21.709 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:22:21.709 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:22:21.709 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:22:21.709 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:22:21.709 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:22:21.709 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:21.709 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:22:21.709 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:21.709 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:21.709 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:21.709 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:22:21.709 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:21.709 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:21.709 14:48:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:22:22.382 /dev/nbd0 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:22.382 14:48:21 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:22.382 1+0 records in 00:22:22.382 1+0 records out 00:22:22.382 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372347 s, 11.0 MB/s 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:22:22.382 { 00:22:22.382 "nbd_device": "/dev/nbd0", 00:22:22.382 "bdev_name": "raid5f" 00:22:22.382 } 00:22:22.382 ]' 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:22:22.382 { 00:22:22.382 "nbd_device": "/dev/nbd0", 00:22:22.382 "bdev_name": "raid5f" 00:22:22.382 } 00:22:22.382 ]' 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:22:22.382 256+0 records in 00:22:22.382 256+0 records out 00:22:22.382 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00891147 s, 118 MB/s 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:22.382 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:22:22.641 256+0 records in 00:22:22.641 256+0 records out 00:22:22.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0394108 s, 26.6 MB/s 00:22:22.641 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:22:22.641 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:22:22.641 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:22.641 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:22:22.641 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:22.641 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:22:22.641 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:22:22.641 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:22.641 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:22:22.641 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:22:22.641 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:22.641 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:22.641 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:22.641 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:22.641 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:22.641 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:22.641 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:22.898 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:22.898 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:22.898 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:22.898 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:22.898 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:22.898 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:22.898 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:22.899 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:22.899 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:22.899 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:22.899 14:48:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:22:23.156 14:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:22:23.156 14:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:23.156 14:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:22:23.156 14:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:22:23.156 14:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:22:23.156 14:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:23.156 14:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:22:23.156 14:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:22:23.156 14:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:22:23.156 14:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:22:23.156 14:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:22:23.156 14:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:22:23.156 14:48:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:23.156 14:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:23.156 14:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:22:23.156 14:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:22:23.414 malloc_lvol_verify 00:22:23.414 14:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:22:23.672 e7ad38c6-0f5b-4f81-8407-eb4ec8704db3 00:22:23.672 14:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:22:23.931 4087a881-59f9-48cf-a57d-f1150906f3ea 00:22:23.931 14:48:22 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:22:24.189 /dev/nbd0 00:22:24.189 14:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:22:24.189 14:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:22:24.189 14:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:22:24.189 14:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:22:24.189 14:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:22:24.189 mke2fs 1.47.0 (5-Feb-2023) 00:22:24.189 Discarding device blocks: 0/4096 done 00:22:24.189 Creating filesystem with 4096 1k blocks and 1024 inodes 00:22:24.189 00:22:24.189 Allocating group tables: 0/1 done 00:22:24.189 Writing inode tables: 0/1 done 00:22:24.447 Creating journal (1024 blocks): done 00:22:24.447 Writing superblocks and filesystem accounting information: 0/1 done 00:22:24.447 00:22:24.447 14:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:22:24.447 14:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:24.447 14:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:24.447 14:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:24.447 14:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:22:24.447 14:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:24.447 14:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:24.705 14:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:24.705 14:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:24.705 14:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:24.705 14:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:24.705 14:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:24.705 14:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:24.705 14:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:22:24.705 14:48:23 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:22:24.705 14:48:23 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90617 00:22:24.705 14:48:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 90617 ']' 00:22:24.705 14:48:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 90617 00:22:24.705 14:48:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:22:24.705 14:48:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:24.705 14:48:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90617 00:22:24.705 14:48:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:24.705 14:48:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:24.705 killing process with pid 90617 00:22:24.705 14:48:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90617' 00:22:24.705 14:48:23 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@971 -- # kill 90617 00:22:24.705 14:48:23 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@976 -- # wait 90617 00:22:26.079 14:48:24 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:22:26.079 00:22:26.079 real 0m6.508s 00:22:26.079 user 0m9.399s 00:22:26.079 sys 0m1.379s 00:22:26.079 14:48:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:26.079 14:48:24 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:22:26.079 ************************************ 00:22:26.079 END TEST bdev_nbd 00:22:26.079 ************************************ 00:22:26.079 14:48:25 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:22:26.079 14:48:25 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:22:26.079 14:48:25 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:22:26.079 14:48:25 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:22:26.079 14:48:25 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:26.079 14:48:25 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:26.079 14:48:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:26.079 ************************************ 00:22:26.079 START TEST bdev_fio 00:22:26.079 ************************************ 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:22:26.079 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # echo serialize_overlap=1 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:22:26.079 ************************************ 00:22:26.079 START TEST bdev_fio_rw_verify 00:22:26.079 ************************************ 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1349 -- # break 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:26.079 14:48:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:22:26.338 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:22:26.338 fio-3.35 00:22:26.338 Starting 1 thread 00:22:38.539 00:22:38.539 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90826: Mon Nov 4 14:48:36 2024 00:22:38.539 read: IOPS=8652, BW=33.8MiB/s (35.4MB/s)(338MiB/10001msec) 00:22:38.539 slat (usec): min=21, max=202, avg=28.11, stdev= 6.02 00:22:38.539 clat (usec): min=13, max=476, avg=183.32, stdev=68.34 00:22:38.539 lat (usec): min=39, max=503, avg=211.43, stdev=69.07 00:22:38.539 clat percentiles (usec): 00:22:38.539 | 50.000th=[ 184], 99.000th=[ 318], 99.900th=[ 359], 99.990th=[ 416], 00:22:38.539 | 99.999th=[ 478] 00:22:38.539 write: IOPS=9080, BW=35.5MiB/s (37.2MB/s)(350MiB/9875msec); 0 zone resets 00:22:38.539 slat (usec): min=10, max=227, avg=23.23, stdev= 6.41 00:22:38.539 clat (usec): min=73, max=813, avg=423.39, stdev=57.62 00:22:38.539 lat (usec): min=93, max=951, avg=446.63, stdev=58.91 00:22:38.539 clat percentiles (usec): 00:22:38.539 | 50.000th=[ 429], 99.000th=[ 545], 99.900th=[ 644], 99.990th=[ 742], 00:22:38.539 | 99.999th=[ 816] 00:22:38.539 bw ( KiB/s): min=32832, max=37640, per=98.46%, avg=35765.47, stdev=1246.49, samples=19 00:22:38.539 iops : min= 8208, max= 9410, avg=8941.37, stdev=311.62, samples=19 00:22:38.539 lat (usec) : 20=0.01%, 50=0.01%, 100=6.78%, 
250=32.37%, 500=57.18% 00:22:38.539 lat (usec) : 750=3.67%, 1000=0.01% 00:22:38.539 cpu : usr=98.66%, sys=0.53%, ctx=71, majf=0, minf=7493 00:22:38.539 IO depths : 1=7.7%, 2=19.9%, 4=55.2%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:38.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.539 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:38.539 issued rwts: total=86535,89674,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:38.539 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:38.539 00:22:38.539 Run status group 0 (all jobs): 00:22:38.539 READ: bw=33.8MiB/s (35.4MB/s), 33.8MiB/s-33.8MiB/s (35.4MB/s-35.4MB/s), io=338MiB (354MB), run=10001-10001msec 00:22:38.539 WRITE: bw=35.5MiB/s (37.2MB/s), 35.5MiB/s-35.5MiB/s (37.2MB/s-37.2MB/s), io=350MiB (367MB), run=9875-9875msec 00:22:38.798 ----------------------------------------------------- 00:22:38.798 Suppressions used: 00:22:38.798 count bytes template 00:22:38.798 1 7 /usr/src/fio/parse.c 00:22:38.798 462 44352 /usr/src/fio/iolog.c 00:22:38.798 1 8 libtcmalloc_minimal.so 00:22:38.798 1 904 libcrypto.so 00:22:38.798 ----------------------------------------------------- 00:22:38.798 00:22:38.798 00:22:38.798 real 0m12.745s 00:22:38.798 user 0m13.063s 00:22:38.798 sys 0m0.764s 00:22:38.798 14:48:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:38.798 14:48:37 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:22:38.798 ************************************ 00:22:38.798 END TEST bdev_fio_rw_verify 00:22:38.798 ************************************ 00:22:38.798 14:48:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:22:39.057 14:48:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:39.057 14:48:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:22:39.057 14:48:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:39.057 14:48:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:22:39.057 14:48:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:22:39.057 14:48:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:22:39.057 14:48:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:22:39.057 14:48:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:22:39.057 14:48:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:22:39.057 14:48:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:22:39.057 14:48:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:39.057 14:48:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:22:39.057 14:48:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:22:39.057 14:48:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:22:39.057 14:48:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:22:39.057 14:48:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "5b55c121-d4f7-4448-97be-d329a2dc19f1"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "5b55c121-d4f7-4448-97be-d329a2dc19f1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": 
false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "5b55c121-d4f7-4448-97be-d329a2dc19f1",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "1ab5da83-86aa-4c24-a41e-6f19ed2354ea",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "ba88c7d3-778d-47d4-94ea-20c738a7cbdc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "815ebb3b-e7a1-474a-acde-354078229050",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:22:39.057 14:48:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:22:39.057 14:48:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:22:39.057 14:48:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:22:39.057 /home/vagrant/spdk_repo/spdk 00:22:39.057 14:48:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:22:39.057 14:48:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:22:39.057 14:48:37 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:22:39.057 00:22:39.057 real 0m12.970s 
00:22:39.057 user 0m13.167s 00:22:39.057 sys 0m0.858s 00:22:39.057 14:48:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:39.057 14:48:37 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:22:39.057 ************************************ 00:22:39.057 END TEST bdev_fio 00:22:39.057 ************************************ 00:22:39.057 14:48:38 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:39.057 14:48:38 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:22:39.057 14:48:38 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:22:39.057 14:48:38 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:39.057 14:48:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:39.057 ************************************ 00:22:39.057 START TEST bdev_verify 00:22:39.057 ************************************ 00:22:39.058 14:48:38 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:22:39.058 [2024-11-04 14:48:38.124873] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 
00:22:39.058 [2024-11-04 14:48:38.125075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90986 ] 00:22:39.316 [2024-11-04 14:48:38.300332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:39.316 [2024-11-04 14:48:38.422406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.316 [2024-11-04 14:48:38.422430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.902 Running I/O for 5 seconds... 00:22:42.265 13782.00 IOPS, 53.84 MiB/s [2024-11-04T14:48:42.323Z] 13606.50 IOPS, 53.15 MiB/s [2024-11-04T14:48:43.266Z] 13895.00 IOPS, 54.28 MiB/s [2024-11-04T14:48:44.202Z] 13413.75 IOPS, 52.40 MiB/s [2024-11-04T14:48:44.202Z] 13514.00 IOPS, 52.79 MiB/s 00:22:45.079 Latency(us) 00:22:45.079 [2024-11-04T14:48:44.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:45.079 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:45.079 Verification LBA range: start 0x0 length 0x2000 00:22:45.079 raid5f : 5.01 6837.53 26.71 0.00 0.00 28130.76 281.13 26333.56 00:22:45.079 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:45.080 Verification LBA range: start 0x2000 length 0x2000 00:22:45.080 raid5f : 5.02 6690.15 26.13 0.00 0.00 28724.62 229.00 27763.43 00:22:45.080 [2024-11-04T14:48:44.203Z] =================================================================================================================== 00:22:45.080 [2024-11-04T14:48:44.203Z] Total : 13527.68 52.84 0.00 0.00 28424.72 229.00 27763.43 00:22:46.456 00:22:46.456 real 0m7.130s 00:22:46.456 user 0m13.119s 00:22:46.456 sys 0m0.322s 00:22:46.456 14:48:45 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:46.456 14:48:45 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:22:46.456 ************************************ 00:22:46.456 END TEST bdev_verify 00:22:46.456 ************************************ 00:22:46.456 14:48:45 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:22:46.456 14:48:45 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:22:46.456 14:48:45 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:46.456 14:48:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:46.456 ************************************ 00:22:46.456 START TEST bdev_verify_big_io 00:22:46.456 ************************************ 00:22:46.456 14:48:45 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:22:46.456 [2024-11-04 14:48:45.306847] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:22:46.456 [2024-11-04 14:48:45.307032] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91079 ] 00:22:46.456 [2024-11-04 14:48:45.475114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:46.722 [2024-11-04 14:48:45.588494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.722 [2024-11-04 14:48:45.588502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:47.302 Running I/O for 5 seconds... 
00:22:49.179 630.00 IOPS, 39.38 MiB/s [2024-11-04T14:48:49.236Z] 761.00 IOPS, 47.56 MiB/s [2024-11-04T14:48:50.611Z] 761.33 IOPS, 47.58 MiB/s [2024-11-04T14:48:51.548Z] 761.50 IOPS, 47.59 MiB/s [2024-11-04T14:48:51.548Z] 786.40 IOPS, 49.15 MiB/s 00:22:52.425 Latency(us) 00:22:52.425 [2024-11-04T14:48:51.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.425 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:22:52.425 Verification LBA range: start 0x0 length 0x200 00:22:52.425 raid5f : 5.20 402.75 25.17 0.00 0.00 7749241.11 175.01 350796.33 00:22:52.425 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:22:52.425 Verification LBA range: start 0x200 length 0x200 00:22:52.425 raid5f : 5.22 400.97 25.06 0.00 0.00 7829330.85 184.32 354609.34 00:22:52.425 [2024-11-04T14:48:51.548Z] =================================================================================================================== 00:22:52.425 [2024-11-04T14:48:51.548Z] Total : 803.73 50.23 0.00 0.00 7789285.98 175.01 354609.34 00:22:53.800 00:22:53.800 real 0m7.372s 00:22:53.800 user 0m13.636s 00:22:53.800 sys 0m0.294s 00:22:53.800 14:48:52 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:53.800 14:48:52 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:22:53.800 ************************************ 00:22:53.800 END TEST bdev_verify_big_io 00:22:53.800 ************************************ 00:22:53.801 14:48:52 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:53.801 14:48:52 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:22:53.801 14:48:52 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:53.801 14:48:52 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:53.801 ************************************ 00:22:53.801 START TEST bdev_write_zeroes 00:22:53.801 ************************************ 00:22:53.801 14:48:52 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:53.801 [2024-11-04 14:48:52.731052] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:22:53.801 [2024-11-04 14:48:52.731204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91172 ] 00:22:53.801 [2024-11-04 14:48:52.898459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.059 [2024-11-04 14:48:53.009581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.625 Running I/O for 1 seconds... 
00:22:55.560 22047.00 IOPS, 86.12 MiB/s 00:22:55.560 Latency(us) 00:22:55.560 [2024-11-04T14:48:54.683Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:55.560 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:22:55.560 raid5f : 1.01 22012.26 85.99 0.00 0.00 5792.37 1921.40 8519.68 00:22:55.560 [2024-11-04T14:48:54.683Z] =================================================================================================================== 00:22:55.560 [2024-11-04T14:48:54.683Z] Total : 22012.26 85.99 0.00 0.00 5792.37 1921.40 8519.68 00:22:56.935 00:22:56.935 real 0m3.055s 00:22:56.935 user 0m2.647s 00:22:56.935 sys 0m0.279s 00:22:56.936 14:48:55 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:56.936 14:48:55 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:22:56.936 ************************************ 00:22:56.936 END TEST bdev_write_zeroes 00:22:56.936 ************************************ 00:22:56.936 14:48:55 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:56.936 14:48:55 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:22:56.936 14:48:55 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:56.936 14:48:55 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:56.936 ************************************ 00:22:56.936 START TEST bdev_json_nonenclosed 00:22:56.936 ************************************ 00:22:56.936 14:48:55 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:56.936 [2024-11-04 
14:48:55.867841] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:22:56.936 [2024-11-04 14:48:55.868046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91224 ] 00:22:56.936 [2024-11-04 14:48:56.049288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.194 [2024-11-04 14:48:56.168809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.194 [2024-11-04 14:48:56.168914] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:22:57.194 [2024-11-04 14:48:56.168993] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:22:57.194 [2024-11-04 14:48:56.169009] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:57.452 00:22:57.452 real 0m0.658s 00:22:57.452 user 0m0.417s 00:22:57.452 sys 0m0.136s 00:22:57.452 14:48:56 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:57.452 14:48:56 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:22:57.452 ************************************ 00:22:57.452 END TEST bdev_json_nonenclosed 00:22:57.452 ************************************ 00:22:57.452 14:48:56 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:57.452 14:48:56 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:22:57.452 14:48:56 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:57.452 14:48:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:57.452 
************************************ 00:22:57.452 START TEST bdev_json_nonarray 00:22:57.452 ************************************ 00:22:57.452 14:48:56 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:57.452 [2024-11-04 14:48:56.564552] Starting SPDK v25.01-pre git sha1 78b0a6b78 / DPDK 24.03.0 initialization... 00:22:57.452 [2024-11-04 14:48:56.564734] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91251 ] 00:22:57.711 [2024-11-04 14:48:56.740352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.969 [2024-11-04 14:48:56.870312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.969 [2024-11-04 14:48:56.870465] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:22:57.969 [2024-11-04 14:48:56.870494] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:22:57.969 [2024-11-04 14:48:56.870519] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:58.228 ************************************ 00:22:58.228 END TEST bdev_json_nonarray 00:22:58.228 ************************************ 00:22:58.228 00:22:58.228 real 0m0.660s 00:22:58.228 user 0m0.418s 00:22:58.228 sys 0m0.137s 00:22:58.228 14:48:57 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:58.229 14:48:57 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:22:58.229 14:48:57 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:22:58.229 14:48:57 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:22:58.229 14:48:57 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:22:58.229 14:48:57 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:22:58.229 14:48:57 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:22:58.229 14:48:57 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:22:58.229 14:48:57 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:58.229 14:48:57 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:22:58.229 14:48:57 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:22:58.229 14:48:57 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:22:58.229 14:48:57 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:22:58.229 00:22:58.229 real 0m48.027s 00:22:58.229 user 1m5.958s 00:22:58.229 sys 0m5.067s 00:22:58.229 14:48:57 blockdev_raid5f -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:58.229 ************************************ 00:22:58.229 END TEST blockdev_raid5f 00:22:58.229 
************************************ 00:22:58.229 14:48:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:58.229 14:48:57 -- spdk/autotest.sh@194 -- # uname -s 00:22:58.229 14:48:57 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:22:58.229 14:48:57 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:22:58.229 14:48:57 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:22:58.229 14:48:57 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:22:58.229 14:48:57 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:22:58.229 14:48:57 -- spdk/autotest.sh@256 -- # timing_exit lib 00:22:58.229 14:48:57 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:58.229 14:48:57 -- common/autotest_common.sh@10 -- # set +x 00:22:58.229 14:48:57 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:22:58.229 14:48:57 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:22:58.229 14:48:57 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:22:58.229 14:48:57 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:22:58.229 14:48:57 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:22:58.229 14:48:57 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:22:58.229 14:48:57 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:22:58.229 14:48:57 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:22:58.229 14:48:57 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:22:58.229 14:48:57 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:22:58.229 14:48:57 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:22:58.229 14:48:57 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:22:58.229 14:48:57 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:22:58.229 14:48:57 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:22:58.229 14:48:57 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:22:58.229 14:48:57 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:22:58.229 14:48:57 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:22:58.229 14:48:57 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:22:58.229 14:48:57 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 
00:22:58.229 14:48:57 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup
00:22:58.229 14:48:57 -- common/autotest_common.sh@724 -- # xtrace_disable
00:22:58.229 14:48:57 -- common/autotest_common.sh@10 -- # set +x
00:22:58.229 14:48:57 -- spdk/autotest.sh@384 -- # autotest_cleanup
00:22:58.229 14:48:57 -- common/autotest_common.sh@1394 -- # local autotest_es=0
00:22:58.229 14:48:57 -- common/autotest_common.sh@1395 -- # xtrace_disable
00:22:58.229 14:48:57 -- common/autotest_common.sh@10 -- # set +x
00:23:00.133 INFO: APP EXITING
00:23:00.133 INFO: killing all VMs
00:23:00.133 INFO: killing vhost app
00:23:00.133 INFO: EXIT DONE
00:23:00.134 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:23:00.392 Waiting for block devices as requested
00:23:00.392 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:23:00.392 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:23:01.327 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:23:01.327 Cleaning
00:23:01.327 Removing: /var/run/dpdk/spdk0/config
00:23:01.327 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:23:01.327 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:23:01.327 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:23:01.327 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:23:01.327 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:23:01.327 Removing: /var/run/dpdk/spdk0/hugepage_info
00:23:01.327 Removing: /dev/shm/spdk_tgt_trace.pid56831
00:23:01.327 Removing: /var/run/dpdk/spdk0
00:23:01.327 Removing: /var/run/dpdk/spdk_pid56596
00:23:01.327 Removing: /var/run/dpdk/spdk_pid56831
00:23:01.327 Removing: /var/run/dpdk/spdk_pid57060
00:23:01.327 Removing: /var/run/dpdk/spdk_pid57164
00:23:01.327 Removing: /var/run/dpdk/spdk_pid57220
00:23:01.327 Removing: /var/run/dpdk/spdk_pid57348
00:23:01.327 Removing: /var/run/dpdk/spdk_pid57371
00:23:01.327 Removing: /var/run/dpdk/spdk_pid57576
00:23:01.327 Removing: /var/run/dpdk/spdk_pid57693
00:23:01.327 Removing: /var/run/dpdk/spdk_pid57800
00:23:01.327 Removing: /var/run/dpdk/spdk_pid57922
00:23:01.327 Removing: /var/run/dpdk/spdk_pid58025
00:23:01.327 Removing: /var/run/dpdk/spdk_pid58064
00:23:01.327 Removing: /var/run/dpdk/spdk_pid58106
00:23:01.327 Removing: /var/run/dpdk/spdk_pid58177
00:23:01.327 Removing: /var/run/dpdk/spdk_pid58288
00:23:01.327 Removing: /var/run/dpdk/spdk_pid58767
00:23:01.327 Removing: /var/run/dpdk/spdk_pid58840
00:23:01.327 Removing: /var/run/dpdk/spdk_pid58914
00:23:01.327 Removing: /var/run/dpdk/spdk_pid58941
00:23:01.327 Removing: /var/run/dpdk/spdk_pid59092
00:23:01.327 Removing: /var/run/dpdk/spdk_pid59108
00:23:01.327 Removing: /var/run/dpdk/spdk_pid59262
00:23:01.327 Removing: /var/run/dpdk/spdk_pid59278
00:23:01.327 Removing: /var/run/dpdk/spdk_pid59353
00:23:01.327 Removing: /var/run/dpdk/spdk_pid59371
00:23:01.327 Removing: /var/run/dpdk/spdk_pid59435
00:23:01.327 Removing: /var/run/dpdk/spdk_pid59464
00:23:01.327 Removing: /var/run/dpdk/spdk_pid59661
00:23:01.327 Removing: /var/run/dpdk/spdk_pid59698
00:23:01.327 Removing: /var/run/dpdk/spdk_pid59781
00:23:01.327 Removing: /var/run/dpdk/spdk_pid61157
00:23:01.327 Removing: /var/run/dpdk/spdk_pid61369
00:23:01.327 Removing: /var/run/dpdk/spdk_pid61520
00:23:01.327 Removing: /var/run/dpdk/spdk_pid62169
00:23:01.327 Removing: /var/run/dpdk/spdk_pid62386
00:23:01.327 Removing: /var/run/dpdk/spdk_pid62532
00:23:01.327 Removing: /var/run/dpdk/spdk_pid63186
00:23:01.327 Removing: /var/run/dpdk/spdk_pid63522
00:23:01.327 Removing: /var/run/dpdk/spdk_pid63662
00:23:01.327 Removing: /var/run/dpdk/spdk_pid65074
00:23:01.327 Removing: /var/run/dpdk/spdk_pid65334
00:23:01.327 Removing: /var/run/dpdk/spdk_pid65475
00:23:01.327 Removing: /var/run/dpdk/spdk_pid66887
00:23:01.327 Removing: /var/run/dpdk/spdk_pid67151
00:23:01.327 Removing: /var/run/dpdk/spdk_pid67292
00:23:01.327 Removing: /var/run/dpdk/spdk_pid68705
00:23:01.327 Removing: /var/run/dpdk/spdk_pid69162
00:23:01.327 Removing: /var/run/dpdk/spdk_pid69302
00:23:01.327 Removing: /var/run/dpdk/spdk_pid70819
00:23:01.327 Removing: /var/run/dpdk/spdk_pid71086
00:23:01.327 Removing: /var/run/dpdk/spdk_pid71238
00:23:01.327 Removing: /var/run/dpdk/spdk_pid72748
00:23:01.327 Removing: /var/run/dpdk/spdk_pid73017
00:23:01.327 Removing: /var/run/dpdk/spdk_pid73163
00:23:01.327 Removing: /var/run/dpdk/spdk_pid74682
00:23:01.327 Removing: /var/run/dpdk/spdk_pid75180
00:23:01.327 Removing: /var/run/dpdk/spdk_pid75326
00:23:01.327 Removing: /var/run/dpdk/spdk_pid75475
00:23:01.327 Removing: /var/run/dpdk/spdk_pid75921
00:23:01.327 Removing: /var/run/dpdk/spdk_pid76684
00:23:01.327 Removing: /var/run/dpdk/spdk_pid77068
00:23:01.327 Removing: /var/run/dpdk/spdk_pid77788
00:23:01.327 Removing: /var/run/dpdk/spdk_pid78272
00:23:01.327 Removing: /var/run/dpdk/spdk_pid79066
00:23:01.327 Removing: /var/run/dpdk/spdk_pid79486
00:23:01.327 Removing: /var/run/dpdk/spdk_pid81492
00:23:01.327 Removing: /var/run/dpdk/spdk_pid81945
00:23:01.327 Removing: /var/run/dpdk/spdk_pid82396
00:23:01.327 Removing: /var/run/dpdk/spdk_pid84516
00:23:01.327 Removing: /var/run/dpdk/spdk_pid85007
00:23:01.327 Removing: /var/run/dpdk/spdk_pid85519
00:23:01.327 Removing: /var/run/dpdk/spdk_pid86592
00:23:01.327 Removing: /var/run/dpdk/spdk_pid86922
00:23:01.327 Removing: /var/run/dpdk/spdk_pid87876
00:23:01.327 Removing: /var/run/dpdk/spdk_pid88206
00:23:01.327 Removing: /var/run/dpdk/spdk_pid89165
00:23:01.327 Removing: /var/run/dpdk/spdk_pid89489
00:23:01.586 Removing: /var/run/dpdk/spdk_pid90174
00:23:01.586 Removing: /var/run/dpdk/spdk_pid90453
00:23:01.586 Removing: /var/run/dpdk/spdk_pid90515
00:23:01.586 Removing: /var/run/dpdk/spdk_pid90557
00:23:01.586 Removing: /var/run/dpdk/spdk_pid90812
00:23:01.586 Removing: /var/run/dpdk/spdk_pid90986
00:23:01.586 Removing: /var/run/dpdk/spdk_pid91079
00:23:01.586 Removing: /var/run/dpdk/spdk_pid91172
00:23:01.586 Removing: /var/run/dpdk/spdk_pid91224
00:23:01.586 Removing: /var/run/dpdk/spdk_pid91251
00:23:01.586 Clean
00:23:01.586 14:49:00 -- common/autotest_common.sh@1451 -- # return 0
00:23:01.586 14:49:00 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:23:01.586 14:49:00 -- common/autotest_common.sh@730 -- # xtrace_disable
00:23:01.586 14:49:00 -- common/autotest_common.sh@10 -- # set +x
00:23:01.586 14:49:00 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:23:01.586 14:49:00 -- common/autotest_common.sh@730 -- # xtrace_disable
00:23:01.586 14:49:00 -- common/autotest_common.sh@10 -- # set +x
00:23:01.586 14:49:00 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:23:01.586 14:49:00 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:23:01.586 14:49:00 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:23:01.586 14:49:00 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:23:01.586 14:49:00 -- spdk/autotest.sh@394 -- # hostname
00:23:01.586 14:49:00 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:23:01.845 geninfo: WARNING: invalid characters removed from testname!
00:23:23.799 14:49:22 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:23:27.986 14:49:26 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:23:29.888 14:49:28 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:23:32.419 14:49:31 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:23:34.953 14:49:33 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:23:37.485 14:49:36 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:23:40.021 14:49:38 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:23:40.021 14:49:38 -- spdk/autorun.sh@1 -- $ timing_finish
00:23:40.021 14:49:38 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:23:40.021 14:49:38 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:23:40.021 14:49:38 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:23:40.021 14:49:38 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:23:40.021 + [[ -n 5204 ]]
00:23:40.021 + sudo kill 5204
00:23:40.093 [Pipeline] }
00:23:40.107 [Pipeline] // timeout
00:23:40.112 [Pipeline] }
00:23:40.125 [Pipeline] // stage
00:23:40.129 [Pipeline] }
00:23:40.143 [Pipeline] // catchError
00:23:40.152 [Pipeline] stage
00:23:40.154 [Pipeline] { (Stop VM)
00:23:40.166 [Pipeline] sh
00:23:40.444 + vagrant halt
00:23:43.732 ==> default: Halting domain...
00:23:49.013 [Pipeline] sh
00:23:49.292 + vagrant destroy -f
00:23:52.633 ==> default: Removing domain...
00:23:52.645 [Pipeline] sh
00:23:52.925 + mv output /var/jenkins/workspace/raid-vg-autotest_2/output
00:23:52.934 [Pipeline] }
00:23:52.949 [Pipeline] // stage
00:23:52.954 [Pipeline] }
00:23:52.968 [Pipeline] // dir
00:23:52.973 [Pipeline] }
00:23:52.987 [Pipeline] // wrap
00:23:52.993 [Pipeline] }
00:23:53.005 [Pipeline] // catchError
00:23:53.013 [Pipeline] stage
00:23:53.015 [Pipeline] { (Epilogue)
00:23:53.027 [Pipeline] sh
00:23:53.309 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:23:58.588 [Pipeline] catchError
00:23:58.589 [Pipeline] {
00:23:58.601 [Pipeline] sh
00:23:58.880 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:23:58.880 Artifacts sizes are good
00:23:58.889 [Pipeline] }
00:23:58.902 [Pipeline] // catchError
00:23:58.912 [Pipeline] archiveArtifacts
00:23:58.918 Archiving artifacts
00:23:59.040 [Pipeline] cleanWs
00:23:59.051 [WS-CLEANUP] Deleting project workspace...
00:23:59.051 [WS-CLEANUP] Deferred wipeout is used...
00:23:59.058 [WS-CLEANUP] done
00:23:59.060 [Pipeline] }
00:23:59.075 [Pipeline] // stage
00:23:59.080 [Pipeline] }
00:23:59.093 [Pipeline] // node
00:23:59.099 [Pipeline] End of Pipeline
00:23:59.135 Finished: SUCCESS